CN110335211B - Depth image correction method, terminal device and computer storage medium - Google Patents
Depth image correction method, terminal device and computer storage medium
- Publication number
- CN110335211B (application CN201910550733.5A)
- Authority
- CN
- China
- Prior art keywords
- depth information
- color image
- target object
- pixel
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
Embodiments of the present application disclose a depth image correction method, a terminal device, and a computer storage medium. The method includes: acquiring an original image corresponding to a target object, and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a time-of-flight (TOF) sensor and the primary and secondary color images are captured by dual cameras; determining, from the primary and secondary color images, first depth information and a first confidence level corresponding to the target object using a preset dual-camera algorithm; determining, from the original image, second depth information corresponding to the target object; determining an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level; and correcting the erroneous data region using the second depth information and the primary color image to obtain target depth information, from which a depth image is obtained.
Description
Technical Field
The present application relates to the field of image processing technology, and in particular to a depth image correction method, a terminal device, and a computer storage medium.
Background
With the rapid development of smart terminals, devices such as mobile phones, handheld computers, digital cameras, and video cameras have become indispensable tools in users' daily lives and bring great convenience to many aspects of life. Most existing terminal devices have a photographing function, allowing users to capture a wide variety of images.
When shooting an image with a bokeh (background blur) effect, the terminal device usually needs to be equipped with dual cameras. Although acquiring depth information with dual cameras offers a simple structure, low hardware power consumption, and high resolution, it still has drawbacks: it adapts poorly to texture-less, repetitive-texture, overexposed, and underexposed scenes, so the acquired depth information tends to be inaccurate, which degrades the portrait-blur effect.
Summary of the Invention
The main purpose of this application is to provide a depth image correction method, a terminal device, and a computer storage medium that can repair depth errors occurring in texture-less, repetitive-texture, overexposed, and underexposed regions in the dual-camera portrait mode, thereby improving the accuracy of the depth map in that mode and, in turn, the accuracy of portrait blurring.
To achieve the above purpose, the technical solution of the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a depth image correction method, the method comprising:
acquiring an original image corresponding to a target object, and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a time-of-flight (TOF) sensor and the primary and secondary color images are captured by dual cameras;
determining, from the primary color image and the secondary color image, first depth information and a first confidence level corresponding to the target object using a preset dual-camera algorithm, and determining, from the original image, second depth information corresponding to the target object;
determining an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level; and
correcting the erroneous data region using the second depth information and the primary color image to obtain target depth information, and obtaining a depth image from the target depth information.
In a second aspect, an embodiment of the present application provides a terminal device comprising an acquisition unit, a determination unit, and a correction unit, wherein:
the acquisition unit is configured to acquire an original image corresponding to a target object, and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a TOF sensor and the primary and secondary color images are captured by dual cameras;
the determination unit is configured to determine, from the primary color image and the secondary color image, first depth information and a first confidence level corresponding to the target object using a preset dual-camera algorithm; to determine, from the original image, second depth information corresponding to the target object; and to determine an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level; and
the correction unit is configured to correct the erroneous data region using the second depth information and the primary color image to obtain target depth information, and to obtain a depth image from the target depth information.
In a third aspect, an embodiment of the present application provides a terminal device comprising a memory and a processor, wherein:
the memory is configured to store a computer program executable on the processor; and
the processor is configured to execute the depth image correction method of the first aspect when running the computer program.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a depth image correction program which, when executed by at least one processor, implements the depth image correction method of the first aspect.
The depth image correction method, terminal device, and computer storage medium provided by the embodiments of the present application acquire an original image corresponding to a target object and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a TOF sensor and the primary and secondary color images are captured by dual cameras; determine, from the primary and secondary color images, first depth information and a first confidence level corresponding to the target object using a preset dual-camera algorithm; determine, from the original image, second depth information corresponding to the target object; determine an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level; and finally correct the erroneous data region using the second depth information and the primary color image to obtain target depth information, from which a depth image is obtained. In this way, optimizing the first depth information with the second depth information repairs depth errors in texture-less, repetitive-texture, overexposed, and underexposed regions in the dual-camera portrait mode, improving depth accuracy in that mode. In addition, since the target depth information is mainly used to blur the primary color image, it also improves the accuracy and overall quality of portrait blurring.
Brief Description of the Drawings
FIG. 1 is an exploded structural diagram of a TOF camera provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a dual-camera bokeh pipeline provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a depth image correction method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application;
FIG. 5 is a schematic comparison of the effect of epipolar rectification provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the effect of dual-camera disparity calculation provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a model for calculating depth information provided by an embodiment of the present application;
FIG. 8 is a detailed schematic flowchart of a depth image correction method provided by an embodiment of the present application;
FIG. 9 is a schematic comparison of the effect of portrait blurring provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the composition of a terminal device provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the hardware structure of another terminal device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
In recent years, with the rapid development of time-of-flight (TOF) technology, research on optics has deepened. As a three-dimensional (3D) imaging technology, TOF is widely used in terminal devices such as smartphones, handheld computers, tablet computers, and digital cameras, enabling applications such as distance measurement, 3D modeling, photo bokeh, and motion-sensing games; it can also be combined with augmented reality (AR) technology for applications such as AR glasses.
Generally speaking, a TOF camera consists of a light emitting module and a light receiving module. The light emitting module is also called a laser emitter, TOF emitter, or illumination module; the light receiving module is also called a detector, TOF receiver, or photosensitive receiving module. Specifically, the light emitting module emits modulated near-infrared light, which is reflected by the photographed object; the light receiving module then computes the time difference or phase difference between emission and reflection and converts it into the distance of the photographed object, thereby producing depth information.
Referring to FIG. 1, which shows an exploded structural diagram of a TOF camera provided by an embodiment of the present application. As shown in FIG. 1, the TOF camera 10 includes a light emitting module 110 and a light receiving module 120. The light emitting module 110 consists of a diffuser, a photodiode (PD), a vertical-cavity surface-emitting laser (VCSEL), and a ceramic package; the light receiving module 120 consists of a lens, a 940 nm narrow-band filter, and a TOF sensor. Those skilled in the art will understand that the structure shown in FIG. 1 does not limit the TOF camera, which may include more or fewer components than shown, combine certain components, or arrange components differently.
It can be understood that, depending on the signal result obtained, TOF can be divided into direct time-of-flight (D-TOF) and indirect time-of-flight (I-TOF). D-TOF measures a time difference, whereas I-TOF measures the phase offset of the return signal from different targets (for example, the proportion of charge or voltage accumulated in different phase windows), from which the distance of the photographed object is computed to produce depth information.
In addition, depending on the modulation method, I-TOF can be divided into pulsed modulation and continuous-wave modulation schemes. The mainstream approach currently used in most manufacturers' terminal devices is the continuous-wave indirect TOF scheme (denoted CW-I-TOF). In the CW-I-TOF scheme, each pixel contains two capacitors, and the light emitting module emits four bursts of square-wave pulses with pulse period Δt. The light receiving module samples with a phase delay per window of 90°, i.e. a quarter of the pulse period (Δt/4), giving window phase delays of 0°, 180°, 90°, and 270°; this is also known as the four-phase method. During exposure, the two capacitors of each pixel charge alternately with equal exposure time, and the differences between the two capacitors' exposures in the four windows are recorded as Q1, Q2, Q3, and Q4. Using the relationship between the charge differences and the flight phase, the phase difference can be computed as φ = arctan((Q3 − Q4) / (Q1 − Q2)), and converting this phase difference yields the distance D of the photographed object, D = (c / (4πf)) · φ, where c is the speed of light and f is the modulation frequency.
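As a minimal sketch of the four-phase relation described above (the variable names, the simulated charge values, and the 20 MHz modulation frequency are illustrative assumptions, not values from the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def four_phase_depth(q1, q2, q3, q4, mod_freq_hz):
    """Estimate distance from the four charge differences Q1..Q4
    using the standard four-phase CW-I-TOF relation."""
    # Phase offset of the reflected wave relative to the emitted wave.
    phi = math.atan2(q3 - q4, q1 - q2) % (2 * math.pi)
    # One full 2*pi of phase corresponds to the unambiguous range c / (2f).
    return (phi / (2 * math.pi)) * C / (2 * mod_freq_hz)

# Example: simulate a target at 1.0 m with a 20 MHz modulation frequency.
f = 20e6
true_d = 1.0
phi_true = 4 * math.pi * f * true_d / C   # forward model of the flight phase
q1_minus_q2 = math.cos(phi_true)          # simulated charge differences
q3_minus_q4 = math.sin(phi_true)
est = four_phase_depth(q1_minus_q2, 0.0, q3_minus_q4, 0.0, f)
print(round(est, 4))  # -> 1.0
```

Note that any distance beyond the unambiguous range c / (2f) (about 7.5 m at 20 MHz) wraps around, which is exactly the ambiguity the dual-frequency method below resolves.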
When the phase angle corresponding to the object's distance exceeds 2π, two phases measured at different modulation frequencies are needed to resolve the true distance. Suppose the two measured phase values are φ1 and φ2, with unambiguous ranges R1 and R2. Extending φ1 to the candidate distances (φ1/2π + k1)·R1 and φ2 to (φ2/2π + k2)·R2 for integers k1 and k2, there exists a true distance at which the difference between the two corresponding candidate distances is minimal, from which the real distance can be determined.
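The dual-frequency disambiguation described above can be sketched as follows (the two frequencies, the search bound, and the candidate-averaging step are illustrative assumptions):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def unwrap_distance(phi1, f1, phi2, f2, max_range=30.0):
    """Resolve the true distance from two wrapped phases measured at two
    modulation frequencies: extend each phase by integer multiples of its
    unambiguous range and pick the pair of candidates that agree best."""
    r1, r2 = C / (2 * f1), C / (2 * f2)  # unambiguous ranges
    cands1 = [(phi1 / (2 * math.pi) + k) * r1
              for k in range(int(max_range / r1) + 1)]
    cands2 = [(phi2 / (2 * math.pi) + k) * r2
              for k in range(int(max_range / r2) + 1)]
    # The true distance minimizes the disagreement between the two sets.
    d1, d2 = min(((a, b) for a in cands1 for b in cands2),
                 key=lambda p: abs(p[0] - p[1]))
    return (d1 + d2) / 2

# Example: a target at 11.3 m, beyond either single-frequency range.
f1, f2, true_d = 20e6, 100e6 / 6, 11.3   # ~20 MHz and ~16.7 MHz
phi1 = (2 * math.pi * true_d / (C / (2 * f1))) % (2 * math.pi)
phi2 = (2 * math.pi * true_d / (C / (2 * f2))) % (2 * math.pi)
est = unwrap_distance(phi1, f1, phi2, f2)
print(round(est, 3))  # -> 11.3
```

The combined unambiguous range is set by the beat of the two frequencies, so the two frequencies should be chosen so that their candidate ladders only coincide once within the scene's working range.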
As an active depth sensor, TOF has been widely used in terminal devices such as mobile phones; for example, one manufacturer uses it as a rear depth sensor. However, the TOF sensor's resolution is low, so it cannot directly serve applications such as bokeh and matting that demand high accuracy at foreground edges. Therefore, the dual-camera scheme currently remains dominant.
The dual-camera scheme may include a main camera and a secondary camera, both of which may be RGB cameras, where RGB denotes the red (R), green (G), and blue (B) channels; mixing or superimposing these three channels in different proportions yields all colors perceivable by human vision in an image. As a portrait-bokeh application, the dual-camera scheme has become a standard feature of terminal devices. The dual-camera bokeh pipeline, shown in FIG. 2, completes the bokeh function in four steps: dual-camera calibration, epipolar rectification, stereo matching, and bokeh rendering. Acquiring depth information via the dual-camera scheme has many advantages, such as a simple structure, low hardware power consumption, high depth resolution, and coverage of most indoor and outdoor scenes. However, the scheme adapts poorly to texture-less, repetitive-texture, overexposed, and underexposed scenes, so the depth information it generates in such regions may be erroneous, leading to bokeh errors.
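After calibration and epipolar rectification, stereo matching yields a disparity per pixel, and depth follows from the standard pinhole relation Z = f·B/d. A hedged sketch of this conversion (the focal length and baseline values are illustrative, not calibration data from the patent):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity (pixels) to depth (metres) via Z = f*B/d."""
    if disparity_px <= 0:
        return float("inf")  # no match found / point at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative dual-camera parameters: 2600 px focal length, 10 mm baseline.
depth_m = disparity_to_depth(disparity_px=13.0, focal_px=2600.0, baseline_m=0.010)
print(depth_m)  # -> 2.0 (metres)
```

The relation also explains why texture-less or overexposed regions break the scheme: if stereo matching cannot find a reliable disparity there, the resulting depth is wrong regardless of the camera geometry.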
An embodiment of the present application provides a depth image correction method applied to a terminal device. The method acquires an original image corresponding to a target object and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a TOF sensor and the primary and secondary color images are captured by dual cameras; determines, from the primary and secondary color images, first depth information and a first confidence level corresponding to the target object using a preset dual-camera algorithm; determines, from the original image, second depth information corresponding to the target object; determines an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level; and corrects the erroneous data region using the second depth information and the primary color image to obtain target depth information, from which a depth image is obtained. Optimizing the first depth information with the second depth information in this way repairs depth errors in texture-less, repetitive-texture, overexposed, and underexposed regions in the dual-camera portrait mode, improving depth accuracy in that mode; moreover, since the target depth information is mainly used to blur the primary color image, the accuracy and quality of portrait blurring are also improved.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to FIG. 3, which shows a schematic flowchart of a depth image correction method provided by an embodiment of the present application. As shown in FIG. 3, the method may include:
S301: acquiring an original image corresponding to a target object, and a primary color image and a secondary color image corresponding to the target object, where the original image is captured by a TOF sensor and the primary and secondary color images are captured by dual cameras.
It should be noted that this method is applied to a terminal device that includes components such as a TOF sensor and dual cameras (a main camera and a secondary camera). In this way, the original image of the target object can be captured by the TOF sensor, and the primary and secondary color images of the target object can be captured by the dual cameras, facilitating the subsequent computation of depth information.
It should also be noted that the terminal device may be implemented in various forms. For example, the terminal devices described in this application may include mobile terminals such as mobile phones, tablet computers, notebook computers, handheld computers, personal digital assistants (PDAs), wearable devices, digital cameras, and video cameras, as well as fixed terminals such as digital TVs and desktop computers; the embodiments of this application impose no specific limitation.
In some embodiments, for S301, acquiring the original image corresponding to the target object and the primary and secondary color images corresponding to the target object may include:
S301a: capturing the target object with the TOF sensor to obtain the original image corresponding to the target object;
S301b: capturing the target object with the dual cameras to obtain the primary color image of the target object under the main camera and the secondary color image of the target object under the secondary camera, where the dual cameras include the main camera and the secondary camera.
It should be noted that capturing the target object with the TOF sensor yields the original image corresponding to the target object, such as a set of RAW frames, while capturing it with the dual cameras yields the primary color image under the main camera (e.g., an RGB main image) and the secondary color image under the secondary camera (e.g., an RGB secondary image). Thus, the depth information corresponding to the dual-camera mode can be obtained from the primary and secondary color images, denoted first depth information in the embodiments of this application, and the depth information corresponding to the TOF mode can be obtained from the original image captured by the TOF sensor, denoted second depth information.
By way of example, referring to FIG. 4, which shows a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the present application. As shown in FIG. 4, the terminal device may include an application processor (AP), a main camera, a secondary camera, a TOF sensor, and a laser emitter. The AP side includes a first image signal processor (ISP1), a second image signal processor (ISP2), and a Mobile Industry Processor Interface (MIPI); in addition, preset algorithms, such as the preset dual-camera algorithm and a preset calibration algorithm, reside on the AP side, which the embodiments of this application do not specifically limit.
With the terminal device shown in FIG. 4, in dual-camera mode the AP side connects the two cameras through the two ISPs when capturing the target object, obtaining two streams of RGB data while guaranteeing frame synchronization and 3A synchronization, where 3A synchronization covers autofocus (AF), auto exposure (AE), and auto white balance (AWB). In FIG. 4, the main camera captures the target object and feeds the primary color image (one RGB stream) into ISP1, while the secondary camera captures the target object and feeds the secondary color image (the other RGB stream) into ISP2. In addition, the terminal device can use a driver integrated circuit (IC) to guarantee the exposure timing of the laser and infrared (IR) components and to keep the IR exposure synchronized with the main camera's RGB exposure; this can be implemented with software or hardware synchronization. Combined with the preset algorithm on the AP side, the first depth information corresponding to the target object can then be computed. In TOF mode, the terminal device captures a set of RAW frames through the TOF sensor to obtain the second depth information corresponding to the target object. Subsequently, fusing the first and second depth information in the main-camera coordinate system makes it possible to correct erroneous regions of the first depth information in dual-camera mode, thereby improving depth accuracy.
S302: determining, from the primary color image and the secondary color image, first depth information and a first confidence level corresponding to the target object using the preset dual-camera algorithm, and determining, from the original image, second depth information corresponding to the target object.
It should be noted that the first confidence level characterizes the accuracy of the first depth information, and the preset dual-camera algorithm denotes a preset algorithm or model based on dual-camera stereo matching. Specifically, the first depth information and first confidence level in dual-camera mode are computed from the primary and secondary color images via the preset dual-camera algorithm, while the second depth information in TOF mode is computed from the original image; the second depth information can then be used to optimize the first depth information.
It should also be noted that the first depth information is computed from the main color image and the secondary color image and is therefore already expressed in the main-camera coordinate system, so no coordinate-system conversion is needed for it; the second depth information, however, is computed from the original image and is expressed in the TOF coordinate system, so a coordinate-system conversion is still required to align it to the main-camera coordinate system.
Generally speaking, the main camera has a larger resolution, so the depth information it produces also has a higher resolution; taking a 4-megapixel camera as an example, its resolution is 2584×1938. The TOF sensor has a lower resolution, so the depth information it produces has a lower resolution, for example 320×240; that is, the resolution of the first depth information is higher than that of the second depth information. Consequently, after the second depth information is aligned to the main-camera coordinate system, its pixels are sparse, which provides a set of sparse but valid pixels for the subsequent processing.
S303: Determine an erroneous data region in the first depth information based on the first depth information, the second depth information, and the first confidence level.
It should be noted that the erroneous data region refers to the region formed by the pixels of the first depth information whose depth is wrong; this region usually lies within the low-confidence region of the first depth information, where the low-confidence region is determined by the first confidence level.
In the dual-camera mode, the first depth information is prone to depth errors in texture-less or repetitively textured regions. To improve the depth accuracy of the dual-camera mode, the embodiment of the present application therefore needs to locate this erroneous data region so that it can subsequently be calibrated. Thus, after both the first depth information and the second depth information have been aligned to the main-camera coordinate system, the low-confidence region of the first depth information can be determined from the first confidence level; within that low-confidence region, the erroneous data region of the first depth information can then be computed from the first depth information and the second depth information, facilitating the subsequent calibration of that region.
S304: Perform correction processing on the erroneous data region by using the second depth information and the main color image to obtain target depth information, and obtain a depth image according to the target depth information.
It should be noted that, after the erroneous data region in the first depth information is obtained, the pixels in this region can be interpolated and repaired using the second depth information to obtain new depth information. In the new depth information, the erroneous data region is the result of interpolation and repair based on the second depth information, while the non-erroneous regions retain the original first depth information; the new depth information is thus a fusion of the first depth information and the second depth information. To reduce the traces of this synthesis, the new depth information can further be filtered under the guidance of the main color image; the finally output depth information is the target depth information, from which the required depth image is obtained. This resolves the depth errors that the dual-camera mode produces in texture-less, repetitively textured, over-exposed, and under-exposed regions, improving the accuracy of the depth information.
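A minimal sketch of the fusion step described above. The function and variable names are hypothetical, and the patent does not fix a specific interpolation scheme; nearest-valid-sample filling is used here purely for illustration of "replace the erroneous region from the sparse TOF depth, keep the rest":

```python
import numpy as np

def fuse_depth(d1, d2_sparse, error_mask):
    """Replace erroneous pixels of the dual-camera depth d1 with values
    taken from the sparse TOF depth d2_sparse (0 = no sample there).
    Nearest-neighbour filling stands in for the interpolation/repair step."""
    fused = d1.copy()
    ys, xs = np.nonzero(d2_sparse > 0)            # valid sparse TOF samples
    for y, x in zip(*np.nonzero(error_mask)):     # pixels flagged as erroneous
        i = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
        fused[y, x] = d2_sparse[ys[i], xs[i]]     # take the nearest TOF sample
    return fused

d1 = np.array([[1.0, 1.0], [1.0, 9.0]])           # 9.0 is a depth error
d2 = np.array([[0.0, 0.0], [0.0, 1.2]])           # one sparse TOF sample
mask = np.array([[False, False], [False, True]])
print(fuse_depth(d1, d2, mask))
```

In a full pipeline this result would then be smoothed with a color-guided filter over the main color image, as the paragraph above describes.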
Further, in some embodiments, after S304, the method may further include:
performing blurring processing on the main color image according to the depth image to obtain a target image.
It should be noted that the required target image, which may be a bokeh image, is obtained by blurring the main color image according to the acquired depth image. In addition, because the algorithm of the embodiments of the present application is relatively complex, this depth-image correction method is mainly applied in the dual-camera portrait-blurring photographing mode and cannot be applied in the preview mode. Specifically, the embodiments of the present application mainly target applications such as blurring and matting, using the advantages of TOF to optimize the depth information of dual-camera portraits. Since the target image is obtained by blurring the main color image according to the acquired depth image, the depth errors of the dual-camera portrait mode in texture-less, repetitively textured, over-exposed, and under-exposed regions have already been repaired in it, which improves the depth accuracy of the dual-camera portrait mode and thereby the accuracy of portrait blurring.
This embodiment provides a depth-image correction method applied to a terminal device: acquire the original image corresponding to a target object together with the main color image and the secondary color image corresponding to the target object, where the original image is captured by a TOF sensor and the main and secondary color images are captured by dual cameras; determine, from the main color image and the secondary color image with a preset dual-camera algorithm, the first depth information and the first confidence level corresponding to the target object, and determine, from the original image, the second depth information corresponding to the target object; determine, based on the first depth information, the second depth information, and the first confidence level, the erroneous data region in the first depth information; and finally correct the erroneous data region using the second depth information and the main color image to obtain target depth information, from which the depth image is obtained. Since the second depth information comes from the TOF mode and the first depth information comes from the dual-camera mode, optimizing the first depth information with the second depth information repairs the depth errors that the dual-camera portrait mode produces in texture-less, repetitively textured, over-exposed, and under-exposed regions, thereby realizing the TOF-based optimization of the dual-camera portrait mode and improving its depth accuracy. In addition, the target depth information is mainly used for blurring the main color image, so it also improves the accuracy and the effect of portrait blurring.
In another embodiment of the present application, the lens precision and manufacturing process of a camera introduce distortion that deforms the image; moreover, in the dual-camera mode the optical axes of the main camera and the secondary camera are not parallel. Therefore, before the first depth information of the dual-camera mode is computed, distortion correction and epipolar rectification must be applied to the main color image and the secondary color image. Accordingly, in some embodiments, for S302, determining the first depth information and the first confidence level corresponding to the target object from the main color image and the secondary color image by using the preset dual-camera algorithm may include:
S302a: performing distortion correction on the main color image to obtain a corrected main color image;
S302b: performing distortion correction and epipolar rectification on the secondary color image to obtain a corrected secondary color image;
It should be noted that the imaging process of the main or secondary camera is essentially the conversion of coordinate points from the world coordinate system into the camera coordinate system. Because the lens precision and manufacturing process introduce distortion (meaning that a straight line in the world coordinate system is no longer straight after conversion into another coordinate system), the image is deformed, so distortion correction must be applied to the main and secondary color images. In addition, to make the optical axes of the main camera and the secondary camera exactly parallel, so that the same point of the target object appears at the same height in the main color image and the secondary color image, epipolar rectification must also be applied to the secondary color image; the Bouguet rectification algorithm may be used, for example. Specifically, before correction the optical axes (also called the baseline directions) of the main and secondary cameras are not parallel, and the goal of epipolar rectification is to make them exactly parallel; after distortion correction and epipolar rectification, the two images form a parallel binocular pair under the same field of view (FOV).
It should also be noted that, after the main and secondary color images are acquired, they may first be scaled according to a preset ratio to obtain low-resolution color images; these are then distortion-corrected and epipolar-rectified according to the calibration parameters of the dual cameras to obtain the corrected main color image and the corrected secondary color image. The calibration parameters of the dual cameras may be computed with a preset calibration algorithm, such as Zhang Zhengyou's calibration method, or may be provided directly by the manufacturer or supplier of the dual cameras. The preset ratio is a ratio preset according to the target resolution; in practice it is set according to the actual situation and is not specifically limited in the embodiments of the present application.
Because the epipolar lines of the main camera and the secondary camera on the terminal device are not parallel, the height at which a given point of the target object appears in the main color image differs from its height in the secondary color image; after epipolar rectification, the two heights coincide. Consequently, when the main color image and the secondary color image are stereo-matched, matching pixels need only be searched for along the same row.
For example, FIG. 5 shows a comparison of the effect of epipolar rectification provided by an embodiment of the present application. In FIG. 5, before epipolar rectification the optical axes of the main and secondary cameras are not parallel, as shown in (a); at this point, pixel 1 of the target object appears at different heights in the main color image and the secondary color image, as shown in (b). To enable the computation of the first depth information, epipolar rectification is applied to the two cameras, after which their optical axes are exactly parallel, as shown in (c); pixel 1 then appears at the same height in both images, as shown in (d). Thus, when the main and secondary color images are stereo-matched, matching pixels need only be searched for along the same row, which greatly improves efficiency.
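The benefit of rectification can be sketched in code: the match for a pixel is found by a one-dimensional scan along its row. The function name, window size, and SAD cost below are illustrative assumptions, not details fixed by the patent:

```python
import numpy as np

def match_along_row(left, right, y, x, half=1, max_disp=8):
    """For pixel (y, x) of the rectified left (main) image, return the
    disparity that minimises the sum of absolute differences (SAD)
    against the rectified right (secondary) image, searching only
    along row y, as rectification guarantees the match lies there."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - half) + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

For example, on a synthetic pair where the right image is the left image shifted by 3 pixels, the function recovers a disparity of 3 at interior pixels.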
S302c: for each pixel of the target object, determining, based on the corrected main color image and the corrected secondary color image by using the preset dual-camera algorithm, the first depth information and the first confidence level corresponding to that pixel; where the first confidence level characterizes the accuracy of the first depth information.
It should be noted that, after the corrected main color image and the corrected secondary color image are obtained, the first depth information and the first confidence level corresponding to each pixel of the target object can be determined from them; both the first depth information and the first confidence level are defined per pixel.
Further, in some embodiments, for S302c, determining the first depth information corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
performing disparity matching on the corrected main color image and the corrected secondary color image with a dual-camera matching algorithm to obtain the disparity value corresponding to each pixel; and
performing depth conversion on the disparity value with a first preset conversion model to obtain the first depth information corresponding to each pixel.
It should be noted that the dual-camera matching algorithm is a preset algorithm or model for disparity computation and belongs to the classical disparity-computation part of the preset dual-camera algorithm; it may be the semi-global matching (SGM) algorithm or the cross-scale cost aggregation (CSCA) algorithm, among others, which is not specifically limited in the embodiments of the present application. For example, FIG. 6 shows the effect of dual-camera disparity computation provided by an embodiment of the present application: disparity matching is performed on the two images (a) and (b), finally yielding the disparity map shown in (c).
In addition, the first preset conversion model is a preset model for disparity-to-depth conversion, usually a triangulation model that computes depth information from the disparity value and preset imaging parameters; in the embodiments of the present application, the preset imaging parameters may include the baseline distance and the focal length. For example, the first preset conversion model may be Z = Baseline × focal / disparity, where Z is the depth information, Baseline is the distance between the optical axes, focal is the focal length, and disparity is the disparity value; however, the embodiments of the present application do not specifically limit the first preset conversion model either.
For example, FIG. 7 shows a schematic model for computing depth information provided by an embodiment of the present application. In FIG. 7, O_R is the position of the main camera and O_T is the position of the secondary camera; the distance between O_R and O_T is the baseline distance, denoted b. P is the position of the target object; P_1 is the image point obtained when the terminal device captures the target object P through the main camera, and P_1′ is the image point obtained when it captures P through the secondary camera. x_R is the coordinate of the image point P_1 in the main color image, x_T is the coordinate of the image point P_1′ in the secondary color image, and f is the focal length shared by the main and secondary cameras. By similar triangles, (b − (x_R − x_T)) / b = (Z − f) / Z, which gives Z = b·f / (x_R − x_T) = b·f / d, where d = x_R − x_T is the disparity value. Therefore, once the terminal device knows the baseline distance b, the focal length f, and the disparity value d, it can compute the first depth information corresponding to each pixel according to the first preset conversion model (for example, Z = b·f / d).
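The triangulation model above translates directly into code; the function name, units, and numeric values below are illustrative only:

```python
def disparity_to_depth(baseline_mm, focal_px, disparity_px):
    """First preset conversion model: Z = baseline * focal / disparity.
    With the baseline in millimetres and the focal length and disparity
    in pixels, the returned depth is in millimetres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px

# e.g. a 20 mm baseline, a 1000 px focal length and a 10 px disparity
print(disparity_to_depth(20.0, 1000.0, 10.0))
```

Note the inverse relation: doubling the disparity halves the computed depth, which is why nearby objects (large disparity) are measured more precisely than distant ones.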
After the corrected main color image and the corrected secondary color image are obtained, the first confidence level can also be determined. In the embodiments of the present application, the first confidence level may be computed from the matching similarity cost between the corrected main and secondary color images, or from the difference in texture gradient between the main and secondary color images, which is not specifically limited in the embodiments of the present application.
Further, in some embodiments, for S302c, determining the first confidence level corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
performing matching similarity computation on the corrected main color image and the corrected secondary color image to obtain the matching similarity cost corresponding to each pixel; and
determining, based on the matching similarity cost, the first confidence level corresponding to each pixel.
It should be noted that the matching similarity cost corresponding to each pixel of the target object is obtained by performing matching similarity computation on the corrected main and secondary color images; the specific way of computing this cost may be set according to the actual situation in practice and is not specifically limited in the embodiments of the present application.
It should also be noted that, after the matching similarity cost is obtained, the first confidence level corresponding to each pixel can be further determined from it. Specifically, the terminal device may set a cost threshold and compare each pixel's matching similarity cost against it; for example, when a pixel's matching similarity cost is greater than the cost threshold, the corrected main and secondary color images still have a high probability of being mismatched at that pixel, so that pixel's first confidence level is low. In addition, the matching similarity computation for each pixel yields a minimum matching similarity cost and a second-minimum matching similarity cost; if the minimum cost is close to the second-minimum cost, this likewise indicates that the pixel's first confidence level is low.
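Both checks described above (an absolute cost threshold, and the closeness of the minimum to the second-minimum cost) can be sketched as follows; the threshold values and the mapping to [0, 1] are illustrative assumptions, not values taken from the patent:

```python
def confidence_from_costs(costs, cost_thresh=10.0, ratio_thresh=0.9):
    """costs: matching similarity costs of one pixel over all candidate
    disparities (lower = more similar).  Returns a confidence in [0, 1]:
    low when the best cost exceeds the threshold (likely mismatch), or
    when the best and second-best costs are nearly indistinguishable
    (ambiguous match, e.g. in repeated texture)."""
    ordered = sorted(costs)
    best, second = ordered[0], ordered[1]
    if best > cost_thresh:                 # still likely a mismatch
        return 0.0
    if best >= ratio_thresh * second:      # two near-equal minima: ambiguous
        return 0.0
    return 1.0 - best / second             # farther apart -> more confident
```

For a pixel with costs [2.0, 8.0, 9.0] this yields 0.75, while a pixel with two near-equal minima such as [5.0, 5.1] yields 0.0, matching the "repeated texture is unreliable" behaviour described above.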
Further, in some embodiments, for S302c, determining the first confidence level corresponding to each pixel based on the corrected main color image and the corrected secondary color image may include:
computing the first texture gradient corresponding to each pixel in the corrected main color image; and
determining, based on the first texture gradient, the first confidence level corresponding to each pixel.
It should be noted that the first confidence level may also be related to texture richness. The first texture gradient corresponding to each pixel can be computed from the corrected main color image, and the first confidence level can then be determined from the first texture gradient. The specific way of computing the texture gradient may be set according to the actual situation in practice and is not specifically limited in the embodiments of the present application.
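One common choice of texture gradient (an assumption here, since the patent deliberately leaves the computation open) is the magnitude of horizontal and vertical finite differences; texture-less regions then score near zero and receive a low first confidence:

```python
import numpy as np

def texture_gradient(gray):
    """Per-pixel texture gradient of a grayscale image: magnitude of the
    horizontal and vertical forward differences.  Flat (texture-less)
    regions yield values near zero, hence a low first confidence."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, :-1] = np.diff(g, axis=1)
    gy[:-1, :] = np.diff(g, axis=0)
    return np.hypot(gx, gy)

flat = np.full((4, 4), 7.0)                         # texture-less patch
edge = np.tile([0.0, 10.0, 0.0, 10.0], (4, 1))      # strongly textured patch
print(texture_gradient(flat).max(), texture_gradient(edge).max())
```

A Sobel operator would serve equally well; the point is only that confidence tracks local texture richness.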
This embodiment provides a depth-image correction method applied to a terminal device and elaborates the specific implementation of the foregoing embodiments in detail. As can be seen, the technical solution of this embodiment repairs the depth errors that the dual-camera portrait mode produces in texture-less, repetitively textured, over-exposed, and under-exposed regions, thereby realizing the TOF-based optimization of the dual-camera portrait mode and improving its depth accuracy; in addition, the target depth information is mainly used for blurring the main color image, so it also improves the accuracy and the effect of portrait blurring.
In yet another embodiment of the present application, the first depth information is computed from the main color image and the secondary color image and is therefore already expressed in the main-camera coordinate system, so no coordinate-system conversion is needed for it; the second depth information, however, is computed from the original image and is expressed in the TOF coordinate system, so it still needs a coordinate-system conversion to be aligned to the main-camera coordinate system. Therefore, in some embodiments, for S302, determining the second depth information corresponding to the target object according to the original image may include:
S302d: obtaining, according to the original image, the initial depth information of each pixel of the target object in the TOF coordinate system;
It should be noted that the target object is captured by the TOF sensor to obtain the original image corresponding to the target object (for example, a set of RAW images); the initial depth information of each pixel in the TOF coordinate system can then be computed from this original image, for example with the four-phase method.
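A minimal sketch of the four-phase method mentioned above, under the usual continuous-wave TOF model; sign conventions for the phase vary between sensors, and the modulation frequency and sample values below are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def four_phase_depth(q0, q90, q180, q270, f_mod):
    """Continuous-wave TOF with four samples taken at 0/90/180/270 degrees
    of the modulation signal.  The phase shift of the returned light is
    phi = atan2(q90 - q270, q0 - q180), and the (unambiguous) depth is
    d = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)
    return C * phi / (4.0 * math.pi * f_mod)
```

With a 20 MHz modulation frequency the unambiguous range is c / (2·f_mod) ≈ 7.5 m, which is why TOF works well at the portrait distances this method targets.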
S302e: performing coordinate-system conversion on the initial depth information with a second preset conversion model to obtain the second depth information of each pixel in the main-camera coordinate system.
It should be noted that the second preset conversion model is a preset model for coordinate-system conversion, for example converting coordinates from the TOF coordinate system into the main-camera coordinate system. According to the second preset conversion model, the initial depth information in the TOF coordinate system can be converted into the second depth information in the main-camera coordinate system, so as to achieve pixel alignment.
Further, in some embodiments, before S302e, the method may further include:
calibrating the TOF sensor and the dual cameras against each other according to a preset calibration algorithm to obtain calibration parameters;
Correspondingly, performing coordinate-system conversion on the initial depth information with the second preset conversion model to obtain the second depth information of each pixel in the main-camera coordinate system may include:
converting, based on the calibration parameters and the second preset conversion model, the initial depth information into the main-camera coordinate system to obtain the second depth information of each pixel in the main-camera coordinate system.
It should be noted that, before pixel alignment is performed, the TOF sensor and the dual cameras must first be jointly calibrated. The calibration parameters may be computed with a preset calibration algorithm, such as Zhang Zhengyou's calibration method, or may be provided directly by the manufacturer or supplier of the dual cameras, which is not specifically limited in the embodiments of the present application.
Once the calibration parameters are obtained, the initial depth information can be converted into the main-camera coordinate system according to the calibration parameters and the second preset conversion model, yielding the second depth information of each pixel in the main-camera coordinate system. This achieves the pixel alignment of the first depth information and the second depth information, which facilitates their subsequent fusion and thus the correction of the erroneous data region in the first depth information. Note that the main camera usually has a larger resolution, so the resolution of the depth information it produces is generally greater than that of the depth information produced by the TOF sensor; consequently, after the initial depth information is aligned to the main-camera coordinate system, the pixels of the resulting second depth information are sparse, which provides a set of sparse but valid pixels for the subsequent depth-information fusion.
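The conversion just described can be sketched as a standard pinhole reprojection: back-project the TOF pixel to 3-D, apply the calibration extrinsics, and project into the main camera. The intrinsic matrices K_tof and K_main and the extrinsics (R, t) stand for the calibration parameters; all names and values here are illustrative, not from the patent:

```python
import numpy as np

def tof_to_main(u, v, depth, K_tof, K_main, R, t):
    """Back-project TOF pixel (u, v) with its depth into 3-D, transform
    the point into the main-camera frame with the extrinsics (R, t),
    then project it with the main-camera intrinsics.  Returns the
    main-image pixel coordinates and the depth in the main-camera frame."""
    p_tof = depth * (np.linalg.inv(K_tof) @ np.array([u, v, 1.0]))
    p_main = R @ p_tof + t
    uv = K_main @ (p_main / p_main[2])
    return uv[0], uv[1], p_main[2]
```

Because the TOF grid (e.g. 320×240) is much coarser than the main image (e.g. 2584×1938), only a sparse subset of main-image pixels receives a reprojected depth sample, exactly the sparsity noted above.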
本实施例提供了一种深度图像的校正方法，该方法应用于终端设备。本实施例对前述实施例的具体实现进行了详细阐述，从中可以看出，通过本实施例的技术方案，能够修复双摄人像模式下深度信息在无纹理、重复纹理、过曝光、欠曝光等区域depth出错的现象，从而实现了TOF对双摄人像模式的优化，提升了双摄人像模式下depth的准确性；另外，该目标深度信息主要用于对主彩色图像的虚化处理，还可以优化人像虚化的准确性，提升了人像虚化的效果。This embodiment provides a depth image correction method applied to a terminal device, and describes the specific implementation of the foregoing embodiments in detail. It can be seen that the technical solution of this embodiment can repair the depth errors that the dual-camera portrait mode produces in textureless, repeated-texture, overexposed and underexposed regions, so that the TOF optimizes the dual-camera portrait mode and improves the depth accuracy in that mode. In addition, since the target depth information is mainly used for blurring the main color image, it also optimizes the accuracy of portrait blurring and improves the portrait blurring effect.
在本申请的再一实施例中，对于第一深度信息中的错误数据区域，该错误数据区域通常位于第一深度信息的低置信度区域中，而且可以通过对第一深度信息和第二深度信息之间的差值判断来具体确定。因此，在一些实施例中，对于S303来说，所述基于所述第一深度信息、所述第二深度信息以及所述第一置信度，确定所述第一深度信息中的错误数据区域，可以包括：In yet another embodiment of the present application, the erroneous data region in the first depth information is usually located in a low-confidence region of the first depth information, and can be specifically determined by judging the difference between the first depth information and the second depth information. Therefore, in some embodiments, for S303, determining the erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence may include:
S303a:根据所述第一置信度,确定所述第一深度信息中的低置信度区域;S303a: Determine a low-confidence region in the first depth information according to the first confidence;
需要说明的是，第一深度信息中的低置信度区域可以是由第一置信度确定的，而第一置信度与匹配相似性代价、纹理丰富性有关。也就是说，可以根据第一深度信息中的像素与第二深度信息中像素的匹配相似性代价来确定第一深度信息中的低置信度区域，也可以根据第一深度信息中像素的纹理梯度与第二深度信息中像素的纹理梯度来确定第一深度信息中的低置信度区域。It should be noted that the low-confidence region in the first depth information may be determined by the first confidence, and the first confidence is related to the matching similarity cost and the texture richness. That is, the low-confidence region in the first depth information may be determined according to the matching similarity cost between pixels in the first depth information and pixels in the second depth information, or according to the texture gradients of pixels in the first depth information and the texture gradients of pixels in the second depth information.
另外，假定第一置信度阈值是用于衡量第一置信度是否属于低置信度的判定值；这样，还可以根据预先获取的第一置信度阈值进行判断。具体地，将第一置信度与第一置信度阈值进行比较；当第一置信度小于第一置信度阈值时，表明了该第一置信度所对应的像素归属于低置信度区域中，以此可以得到第一深度信息中的低置信度区域。In addition, assume that the first confidence threshold is a judgment value used to measure whether a first confidence belongs to a low confidence; in this way, the judgment can also be made according to the pre-acquired first confidence threshold. Specifically, the first confidence is compared with the first confidence threshold; when the first confidence is smaller than the first confidence threshold, the pixel corresponding to that first confidence belongs to the low-confidence region, and the low-confidence region in the first depth information is thereby obtained.
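As a minimal sketch of step S303a, assuming the first confidence is stored as a per-pixel array (the names here are illustrative, not from the patent):

```python
import numpy as np

def low_confidence_region(conf1, conf_thresh):
    """S303a: a pixel belongs to the low-confidence region of the first
    depth information when its first confidence falls below the
    pre-acquired first confidence threshold."""
    return conf1 < conf_thresh  # boolean mask of the low-confidence region
```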
S303b:针对所述低置信度区域中的每个待判断像素,计算每个待判断像素对应的第一深度信息与该待判断像素的有效邻域内对应的第二深度信息之间的差值;S303b: for each pixel to be judged in the low confidence area, calculate the difference between the first depth information corresponding to each pixel to be judged and the second depth information corresponding to the effective neighborhood of the pixel to be judged;
需要说明的是，第二深度信息中的有效邻域是指与该待判断像素最邻近的区域，而且该有效邻域的大小有所限制，比如可以是5*5或者7*7；但是有效邻域的大小，实际应用中，可以根据实际情况进行设置，本申请实施例不作具体限定。It should be noted that the effective neighborhood in the second depth information refers to the region closest to the pixel to be judged, and its size is limited, for example 5*5 or 7*7; in practical applications, however, the size of the effective neighborhood may be set according to actual conditions, which is not specifically limited in this embodiment of the present application.
这样，在该有效邻域中存在有与第二深度信息相关联的有效像素点，从而可以将待判断像素对应的第一深度信息与该有效像素点对应的第二深度信息进行差值计算，得到两者之间的差值；便于后续根据该差值的大小进一步确定待判断像素是否为错误点。In this way, there are valid pixels associated with the second depth information in the effective neighborhood, so that the difference between the first depth information of the pixel to be judged and the second depth information of those valid pixels can be calculated; this makes it convenient to further determine, according to the magnitude of the difference, whether the pixel to be judged is an error point.
S303c:将所述差值与预设差值阈值进行比较;S303c: Compare the difference with a preset difference threshold;
需要说明的是,预设差值阈值是预先设定的用于衡量待判断像素是否为错误点的判定值。这样,在步骤S303c之后,根据差值与预设差值阈值的比较结果,当差值大于预设差值阈值时,执行步骤S303d;当差值不大于预设差值阈值时,执行步骤S303e。It should be noted that the preset difference threshold is a preset judgment value used to measure whether the pixel to be judged is an error point. In this way, after step S303c, according to the comparison result between the difference and the preset difference threshold, when the difference is greater than the preset difference threshold, execute step S303d; when the difference is not greater than the preset difference threshold, execute step S303e .
S303d:当所述差值大于预设差值阈值时,将该待判断像素标记为错误点,根据标记的错误点,得到所述第一深度信息中的错误数据区域;S303d: when the difference is greater than a preset difference threshold, mark the pixel to be judged as an error point, and obtain an error data area in the first depth information according to the marked error point;
S303e:当所述差值不大于预设差值阈值时,保留该待判断像素对应的第一深度信息,得到所述第一深度信息中的保留数据区域。S303e: When the difference is not greater than a preset difference threshold, retain the first depth information corresponding to the pixel to be determined, and obtain a reserved data area in the first depth information.
需要说明的是，由于双摄模式下，第一深度信息在无纹理、重复纹理等区域存在depth出错的情况；这时候为了提升双摄模式下depth的准确性，本申请实施例需要确定出该错误数据区域，以将第一深度信息进行错误数据区域和原始数据区域的区分。具体地，在第一深度信息和第二深度信息均对齐到主摄坐标系中之后，根据第一置信度，可以确定出第一深度信息中低置信度区域；然后在该低置信度区域中，通过待判断像素对应的第一深度信息与该待判断像素的有效邻域内对应的第二深度信息，可以计算出两者的差值；将差值与预设差值阈值进行比较；当差值大于预设差值阈值时，可以将该待判断像素标记为错误点，根据标记的错误点，得到了第一深度信息中的错误数据区域；当差值不大于预设差值阈值时，表明了该待判断像素对应的第一深度信息是正确的，这时候需要保留该待判断像素对应的第一深度信息，得到所述第一深度信息中的保留数据区域，也就是第一深度信息中的原始数据区域，从而将第一深度信息划分为错误数据区域和原始数据区域。It should be noted that, in the dual-camera mode, the first depth information has depth errors in textureless, repeated-texture and similar regions; to improve the depth accuracy in the dual-camera mode, the embodiment of the present application therefore needs to determine the erroneous data region, so as to divide the first depth information into an erroneous data region and an original data region. Specifically, after both the first depth information and the second depth information are aligned to the main camera coordinate system, the low-confidence region in the first depth information can be determined according to the first confidence; then, within that low-confidence region, the difference between the first depth information of a pixel to be judged and the second depth information within the pixel's effective neighborhood is calculated and compared with the preset difference threshold. When the difference is greater than the preset difference threshold, the pixel to be judged is marked as an error point, and the erroneous data region in the first depth information is obtained from the marked error points; when the difference is not greater than the preset difference threshold, the first depth information of the pixel to be judged is correct and is retained, yielding the retained data region in the first depth information, i.e., the original data region; the first depth information is thus divided into an erroneous data region and an original data region.
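Steps S303a to S303f can be sketched as follows, under the assumption that both depth maps are arrays in the main camera frame with 0 marking pixels where no valid TOF sample landed; the function name and the neighborhood-averaging detail are illustrative, not prescribed by the patent:

```python
import numpy as np

def find_error_region(depth1, depth2, low_conf_mask, diff_thresh, radius=2):
    """Mark error points in the first depth information (S303a-S303f).

    depth1: dense dual-camera depth; depth2: sparse aligned TOF depth
    (0 = no valid sample); low_conf_mask: boolean mask of the
    low-confidence region; radius=2 gives a 5x5 effective neighborhood.
    Returns a boolean mask of the erroneous data region.
    """
    h, w = depth1.shape
    error = np.zeros((h, w), dtype=bool)
    for v, u in zip(*np.nonzero(low_conf_mask)):
        # Valid TOF samples inside the pixel's effective neighborhood.
        patch = depth2[max(0, v - radius):v + radius + 1,
                       max(0, u - radius):u + radius + 1]
        valid = patch[patch > 0]
        if valid.size == 0:
            continue  # S303f: no valid neighbor, keep the pixel as-is
        diff = abs(depth1[v, u] - valid.mean())
        if diff > diff_thresh:
            error[v, u] = True  # S303d: mark as an error point
    return error
```

Pixels outside the returned mask keep their first depth information, forming the retained (original) data region.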
进一步地,在一些实施例中,该方法还可以包括:Further, in some embodiments, the method may also include:
S303f:若所述待判断像素的有效邻域内不存在所述第二深度信息相关联的有效像素点,则不执行所述将所述差值与预设差值阈值进行比较的步骤。S303f: If there is no valid pixel point associated with the second depth information in the valid neighborhood of the pixel to be determined, the step of comparing the difference with a preset difference threshold is not performed.
需要说明的是，由于TOF产生的初始深度信息对齐到主摄坐标系之后，所得到的第二深度信息中像素是稀疏的；这样，对于第二深度信息，在待判断像素的有效邻域内可能会不存在与第二深度信息相关联的有效像素点。也就是说，当待判断像素的有效邻域内不存在第二深度信息对应的有效像素点时，这时候将不能执行差值与预设差值阈值进行比较的步骤，可以保留该待判断像素对应的第一深度信息。It should be noted that, since the initial depth information generated by the TOF is aligned to the main camera coordinate system, the pixels in the resulting second depth information are sparse; thus, for the second depth information, there may be no valid pixel associated with it within the effective neighborhood of a pixel to be judged. In other words, when no valid pixel corresponding to the second depth information exists in the effective neighborhood of the pixel to be judged, the step of comparing the difference with the preset difference threshold cannot be performed, and the first depth information corresponding to that pixel may be retained.
进一步地,在一些实施例中,对于S304来说,所述通过所述第二深度信息以及所述主彩色图像对所述错误数据区域进行校正处理,得到目标深度信息,可以包括:Further, in some embodiments, for S304, performing correction processing on the erroneous data area by using the second depth information and the main color image to obtain target depth information may include:
S304a：针对所述错误数据区域中的每个错误点，通过每个错误点对应的第二深度信息对所述错误数据区域进行加权插值计算，并利用计算得到的深度信息替换所述第一深度信息中的错误数据区域，得到新的深度信息；S304a: For each error point in the erroneous data region, perform a weighted interpolation calculation on the erroneous data region according to the second depth information corresponding to each error point, and replace the erroneous data region in the first depth information with the calculated depth information to obtain new depth information;
S304b:根据所述主彩色图像对所述新的深度信息进行滤波处理,得到所述目标深度信息。S304b: Perform filtering processing on the new depth information according to the main color image to obtain the target depth information.
需要说明的是，错误数据区域中所述包含的错误点，可以利用有效邻域内有效像素点对应的第二深度信息，同时还可以结合颜色相似性以及空间距离的权重，对错误数据区域进行加权插值计算，此时可以获得计算得到的深度信息；再利用计算得到的深度信息替换掉第一深度信息中的错误数据区域，可以得到新的深度信息；可见，该新的深度信息是由第一深度信息和第二深度信息进行融合得到的；为了减弱人工合成的痕迹，还可以以主彩色图像作为引导，对新的深度信息进行滤波，从而减弱了人工合成的痕迹，根据所输出的目标深度信息可以得到深度图像，也就实现了对该错误数据区域的校正处理。It should be noted that, for the error points contained in the erroneous data region, the second depth information of the valid pixels in the effective neighborhood can be used, combined with weights based on color similarity and spatial distance, to perform a weighted interpolation over the erroneous data region, which yields the calculated depth information; replacing the erroneous data region in the first depth information with the calculated depth information then gives new depth information. It can be seen that the new depth information is obtained by fusing the first depth information and the second depth information. To weaken the traces of artificial synthesis, the new depth information can further be filtered with the main color image as a guide; a depth image is then obtained from the output target depth information, which completes the correction of the erroneous data region.
还需要说明的是，滤波处理方式包括引导滤波(Guide Filter)、DT滤波(Domain Transform Filter)、权重中值滤波(Weighted Median Filter)等方式；实际应用中，可以根据实际情况进行设置，本申请实施例不作具体限定。It should also be noted that the filtering methods include guided filtering (Guide Filter), domain transform filtering (Domain Transform Filter), weighted median filtering (Weighted Median Filter), and the like; in practical applications, the method may be set according to actual conditions, which is not specifically limited in this embodiment of the present application.
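The weighted interpolation of step S304a can be sketched as a joint-bilateral fill, assuming a single-channel main color image as the guide; the bandwidths `sigma_c` and `sigma_s` are illustrative placeholders, and one of the guided or weighted-median filter passes listed above would typically follow:

```python
import numpy as np

def fill_error_points(depth1, depth2, error_mask, color, radius=3,
                      sigma_c=10.0, sigma_s=2.0):
    """Replace each error point by a weighted interpolation of the valid
    TOF samples (depth2 > 0) in its neighborhood, weighting each sample
    by color similarity to the main color image and by spatial distance.
    sigma_c / sigma_s are illustrative bandwidths, not from the patent."""
    out = depth1.copy()
    for v, u in zip(*np.nonzero(error_mask)):
        v0, v1 = max(0, v - radius), v + radius + 1
        u0, u1 = max(0, u - radius), u + radius + 1
        patch = depth2[v0:v1, u0:u1]
        ys, xs = np.nonzero(patch > 0)
        if ys.size == 0:
            continue  # no valid TOF sample nearby: leave the pixel as-is
        d = patch[ys, xs]
        # Joint-bilateral weights: color similarity x spatial proximity.
        dc = color[v0 + ys, u0 + xs].astype(float) - float(color[v, u])
        ds2 = (v0 + ys - v) ** 2 + (u0 + xs - u) ** 2
        w = np.exp(-dc ** 2 / (2 * sigma_c ** 2)) * np.exp(-ds2 / (2 * sigma_s ** 2))
        out[v, u] = (w * d).sum() / w.sum()
    return out
```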
示例性地，参见图8，其示出了本申请实施例提供的一种深度图像的校正方法的详细流程示意图。如图8所示，终端设备通过TOF传感器采集到原始图像，可以由一组RAW图组成；终端设备通过主摄像头采集到主彩色图像，以及终端设备通过副摄像头采集到副彩色图像；然后对主彩色图像进行畸变校正处理，对副彩色图像进行畸变校正和极线校正处理，可以分别得到校正后的主彩色图像和校正后的副彩色图像；再利用双摄匹配算法对校正后的主彩色图像和校正后的副彩色图像进行匹配处理，可以得到第一深度信息和第一置信度；然后对原始图像进行深度信息计算，可以得到初始深度信息；由于初始深度信息是在TOF坐标系下，还需要将其转换到主摄坐标系下，从而得到第二深度信息，实现了第二深度信息和第一深度信息的像素对齐；根据第一置信度，确定出第一深度信息中的低置信度区域，然后在低置信度区域计算第一深度信息和第二深度信息之间的差值；再判断差值是否大于预设差值阈值，即对差值的安全性进行判断；当差值大于预设差值阈值时，表明了差值是不安全的，此时将该待判断像素标记为错误点，得到第一深度信息中的错误数据区域，然后利用第二深度信息对错误数据区域进行校正处理(比如修复、填充及插值处理等)；当差值不大于预设差值阈值时，表明了差值是安全的，此时可以保留该待判断像素对应的第一深度信息；最后将两者进行融合，输出最终的深度图像。Exemplarily, see FIG. 8, which shows a detailed schematic flowchart of a depth image correction method provided by an embodiment of the present application. As shown in FIG. 8, the terminal device captures the original image, which may consist of a set of RAW frames, through the TOF sensor; it captures the main color image through the main camera and the secondary color image through the secondary camera. Distortion correction is then applied to the main color image, and distortion correction plus epipolar correction to the secondary color image, yielding the corrected main color image and the corrected secondary color image respectively. A dual-camera matching algorithm then matches the two corrected images to obtain the first depth information and the first confidence. Depth information is computed from the original image to obtain the initial depth information; since the initial depth information is in the TOF coordinate system, it must be converted into the main camera coordinate system to obtain the second depth information, achieving pixel alignment between the second depth information and the first depth information. According to the first confidence, the low-confidence region in the first depth information is determined, and within it the difference between the first depth information and the second depth information is calculated; whether the difference exceeds the preset difference threshold is then judged, i.e., the safety of the difference is evaluated. When the difference is greater than the preset difference threshold, the difference is unsafe: the pixel to be judged is marked as an error point, the erroneous data region in the first depth information is obtained, and the second depth information is used to correct the erroneous data region (e.g., by repair, filling and interpolation). When the difference is not greater than the preset difference threshold, the difference is safe, and the first depth information of the pixel to be judged is retained. Finally, the two are fused to output the final depth image.
在获取到深度图像之后，还可以根据该深度图像对主彩色图像进行虚化处理，能够提高人像虚化的准确性。参见图9，其示出了本申请实施例提供的一种人像虚化效果的对比示意图。如图9所示，(a)和(b)均为背景为重复纹理区域，其中，(a)提供了双摄模式下的人像虚化效果，(b)提供了双摄模式+TOF模式下的人像虚化效果；由此可以看出，双摄模式+TOF模式下的人像虚化效果更好。After the depth image is acquired, the main color image can also be blurred according to the depth image, which improves the accuracy of portrait blurring. Referring to FIG. 9, it shows a comparative schematic diagram of a portrait blurring effect provided by an embodiment of the present application. As shown in FIG. 9, both (a) and (b) have backgrounds with repeated-texture regions, where (a) shows the portrait blurring effect in the dual-camera mode and (b) shows the portrait blurring effect in the dual-camera + TOF mode; it can be seen that the portrait blurring effect in the dual-camera + TOF mode is better.
本实施例提供了一种深度图像的校正方法，该方法应用于终端设备。本实施例对前述实施例的具体实现进行了详细阐述，从中可以看出，通过本实施例的技术方案，能够修复双摄人像模式下深度信息在无纹理、重复纹理、过曝光、欠曝光等区域depth出错的现象，从而实现了TOF对双摄人像模式的优化，提升了双摄人像模式下depth的准确性；另外，该目标深度信息主要用于对主彩色图像的虚化处理，还可以优化人像虚化的准确性，提升了人像虚化的效果。This embodiment provides a depth image correction method applied to a terminal device, and describes the specific implementation of the foregoing embodiments in detail. It can be seen that the technical solution of this embodiment can repair the depth errors that the dual-camera portrait mode produces in textureless, repeated-texture, overexposed and underexposed regions, so that the TOF optimizes the dual-camera portrait mode and improves the depth accuracy in that mode. In addition, since the target depth information is mainly used for blurring the main color image, it also optimizes the accuracy of portrait blurring and improves the portrait blurring effect.
在本申请的再一实施例中,由于TOF模式在室外的效果较差,导致第二深度信息存在有大量的空洞,不适应于执行本申请实施例的深度图像的校正方法。因此,在一些实施例中,在所述根据所述原始图像,确定所述目标对象对应的第二深度信息之后,所述方法还包括:In still another embodiment of the present application, since the effect of the TOF mode is poor outdoors, there are a large number of holes in the second depth information, which is not suitable for performing the depth image correction method of the embodiment of the present application. Therefore, in some embodiments, after determining the second depth information corresponding to the target object according to the original image, the method further includes:
根据所述原始图像,确定所述目标对象对应的第二置信度;其中,所述第二置信度用于表征所述第二深度信息的准确度;According to the original image, a second confidence level corresponding to the target object is determined; wherein, the second confidence level is used to characterize the accuracy of the second depth information;
基于所述第二置信度,确定所述第二深度信息中的空洞个数;determining the number of holes in the second depth information based on the second confidence;
若所述空洞个数大于预设空洞阈值,则不执行所述的深度图像的校正方法。If the number of holes is greater than the preset hole threshold, the method for correcting the depth image is not executed.
需要说明的是，由于TOF模式在室外的效果较差，这时候可以增加一个空洞的个数或者空洞率的判断。如果TOF模式下的第二深度信息存在有大量的空洞或者低置信度区域，这时候可以不执行本申请实施例所述的深度图像的校正方法(具体是指第一深度信息与第二深度信息的融合)，仍然采用正常的双摄模式获取深度图像。具体地，假定预设空洞阈值是用于衡量空洞数量是否过多的一个判定值，那么可以根据第二置信度，确定出第二深度信息中的空洞个数；如果空洞个数大于预设空洞阈值，表明了空洞数量过多，此时可以不执行本申请实施例的深度图像的校正方法，仅采用正常的双摄模式来获取深度图像。It should be noted that, since the TOF mode performs poorly outdoors, a judgment on the number of holes or the hole rate can be added at this point. If the second depth information in the TOF mode contains a large number of holes or low-confidence regions, the depth image correction method described in this embodiment of the present application (specifically, the fusion of the first depth information and the second depth information) may not be executed, and the depth image is still obtained in the normal dual-camera mode. Specifically, assuming that the preset hole threshold is a judgment value used to measure whether the number of holes is excessive, the number of holes in the second depth information can be determined according to the second confidence; if the number of holes is greater than the preset hole threshold, the number of holes is excessive, and the depth image correction method of the embodiment of the present application may not be executed, with the depth image obtained only in the normal dual-camera mode.
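The hole-count check described above can be sketched as follows; the threshold values are illustrative placeholders rather than values taken from the patent:

```python
import numpy as np

def should_run_fusion(conf2, conf_thresh, hole_ratio_thresh=0.5):
    """Decide whether to run the TOF fusion at all: if too many pixels of
    the second depth information are holes (low second confidence, as
    happens with TOF outdoors), fall back to plain dual-camera depth.
    Both thresholds here are illustrative placeholders."""
    holes = (conf2 < conf_thresh).sum()
    return bool(holes / conf2.size <= hole_ratio_thresh)
```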
另外，终端设备中TOF传感器所生成深度信息的分辨率较低，主要是用于校正相对大一些区域的depth错误；而对于镂空、深度层次非常丰富的场景，这没有太好的校正效果。因此，本申请实施例的深度图像的校正方法主要应用于双摄人像模式下的使用。In addition, the depth information generated by the TOF sensor in the terminal device has a low resolution and is mainly used to correct depth errors in relatively large regions; for hollowed-out scenes or scenes with very rich depth layers, the correction effect is limited. Therefore, the depth image correction method of the embodiment of the present application is mainly applied in the dual-camera portrait mode.
本实施例提供了一种深度图像的校正方法，该方法应用于终端设备。本实施例对前述实施例的具体实现进行了详细阐述，从中可以看出，通过本实施例的技术方案，能够修复双摄人像模式下深度信息在无纹理、重复纹理、过曝光、欠曝光等区域depth出错的现象，从而实现了TOF对双摄人像模式的优化，提升了双摄人像模式下depth的准确性；另外，该目标深度信息主要用于对主彩色图像的虚化处理，还可以优化人像虚化的准确性，提升了人像虚化的效果。This embodiment provides a depth image correction method applied to a terminal device, and describes the specific implementation of the foregoing embodiments in detail. It can be seen that the technical solution of this embodiment can repair the depth errors that the dual-camera portrait mode produces in textureless, repeated-texture, overexposed and underexposed regions, so that the TOF optimizes the dual-camera portrait mode and improves the depth accuracy in that mode. In addition, since the target depth information is mainly used for blurring the main color image, it also optimizes the accuracy of portrait blurring and improves the portrait blurring effect.
基于前述实施例相同的发明构思，参见图10，其示出了本申请实施例提供的另一种终端设备100的组成结构示意图。如图10所示，终端设备100可以包括：获取单元1001、确定单元1002和校正单元1003，其中，Based on the same inventive concept as the foregoing embodiments, referring to FIG. 10, it shows a schematic structural diagram of another terminal device 100 provided by an embodiment of the present application. As shown in FIG. 10, the terminal device 100 may include: an acquisition unit 1001, a determination unit 1002 and a correction unit 1003, wherein,
所述获取单元1001，配置为获取目标对象对应的原始图像以及所述目标对象对应的主彩色图像和副彩色图像；其中，所述原始图像是根据TOF传感器对目标对象的采集得到的，所述主彩色图像和副彩色图像是根据双摄像头对目标对象的采集得到的；The acquisition unit 1001 is configured to acquire the original image corresponding to the target object and the main color image and the secondary color image corresponding to the target object; wherein the original image is obtained from the acquisition of the target object by the TOF sensor, and the main color image and the secondary color image are obtained from the acquisition of the target object by the dual cameras;
所述确定单元1002,配置为根据所述主彩色图像和所述副彩色图像,利用预设双摄算法确定所述目标对象对应的第一深度信息和第一置信度;根据所述原始图像,确定所述目标对象对应的第二深度信息;以及还配置为基于所述第一深度信息、所述第二深度信息以及所述第一置信度,确定所述第一深度信息中的错误数据区域;The determining unit 1002 is configured to use a preset dual-camera algorithm to determine the first depth information and the first confidence level corresponding to the target object according to the primary color image and the secondary color image; according to the original image, determining second depth information corresponding to the target object; and further configured to determine an erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence level ;
所述校正单元1003,配置为通过所述第二深度信息以及所述主彩色图像对所述错误数据区域进行校正处理,得到目标深度信息,根据所述目标深度信息得到深度图像。The correction unit 1003 is configured to perform correction processing on the erroneous data area by using the second depth information and the main color image to obtain target depth information, and obtain a depth image according to the target depth information.
在上述方案中，参见图10，所述终端设备100还可以包括虚化单元1004，配置为根据所述深度图像对所述主彩色图像进行虚化处理，得到目标图像。In the above solution, referring to FIG. 10, the terminal device 100 may further include a blurring unit 1004 configured to perform blurring processing on the main color image according to the depth image to obtain a target image.
在上述方案中，参见图10，所述终端设备100还可以包括采集单元1005，配置为通过TOF传感器对所述目标对象进行采集，获取所述目标对象对应的原始图像；以及还配置为通过双摄像头对所述目标对象进行采集，获取所述目标对象在主摄像头下对应的主彩色图像以及所述目标对象在副摄像头下对应的副彩色图像；其中，所述双摄像头包括主摄像头和副摄像头。In the above solution, referring to FIG. 10, the terminal device 100 may further include a capture unit 1005 configured to capture the target object through the TOF sensor to acquire the original image corresponding to the target object; and further configured to capture the target object through the dual cameras to acquire the main color image of the target object under the main camera and the secondary color image of the target object under the secondary camera; wherein the dual cameras include a main camera and a secondary camera.
在上述方案中，所述校正单元1003，还配置为对所述主彩色图像进行畸变校正处理，得到校正后的主彩色图像；以及对所述副彩色图像进行畸变校正和极线校正处理，得到校正后的副彩色图像；In the above solution, the correction unit 1003 is further configured to perform distortion correction processing on the main color image to obtain a corrected main color image; and to perform distortion correction and epipolar correction processing on the secondary color image to obtain a corrected secondary color image;
所述确定单元1002，具体配置为针对所述目标对象中的每个像素，基于校正后的主彩色图像和校正后的副彩色图像，利用预设双摄算法确定每个像素对应的第一深度信息和每个像素对应的第一置信度；其中，所述第一置信度用于表征所述第一深度信息的准确度。The determination unit 1002 is specifically configured to, for each pixel in the target object, determine the first depth information and the first confidence corresponding to each pixel based on the corrected main color image and the corrected secondary color image using a preset dual-camera algorithm; wherein the first confidence is used to characterize the accuracy of the first depth information.
在上述方案中，参见图10，所述终端设备100还可以包括计算单元1006和转换单元1007，其中，In the above solution, referring to FIG. 10, the terminal device 100 may further include a calculation unit 1006 and a conversion unit 1007, wherein,
所述计算单元1006,配置为通过双摄匹配算法对校正后的主彩色图像和校正后的副彩色图像进行视差匹配计算,得到每个像素对应的视差值;The computing unit 1006 is configured to perform parallax matching calculation on the corrected primary color image and the corrected secondary color image through a dual-camera matching algorithm to obtain a parallax value corresponding to each pixel;
所述转换单元1007,配置为通过第一预设转换模型对所述视差值进行深度转换,得到每个像素对应的第一深度信息。The conversion unit 1007 is configured to perform depth conversion on the disparity value by using a first preset conversion model to obtain first depth information corresponding to each pixel.
在上述方案中,所述计算单元1006,还配置为对校正后的主彩色图像和校正后的副彩色图像进行匹配相似性计算,得到每个像素对应的匹配相似性代价;In the above solution, the calculation unit 1006 is further configured to perform matching similarity calculation on the corrected primary color image and the corrected secondary color image, to obtain the matching similarity cost corresponding to each pixel;
所述确定单元1002,还配置为基于所述匹配相似性代价,确定每个像素对应的第一置信度。The determining unit 1002 is further configured to determine a first confidence level corresponding to each pixel based on the matching similarity cost.
在上述方案中,所述计算单元1006,还配置为计算每个像素在校正后的主彩色图像下对应的第一纹理梯度;In the above solution, the calculating unit 1006 is further configured to calculate the first texture gradient corresponding to each pixel under the corrected main color image;
所述确定单元1002,还配置为基于所述第一纹理梯度,确定每个像素对应的第一置信度。The determining unit 1002 is further configured to determine a first confidence level corresponding to each pixel based on the first texture gradient.
在上述方案中，所述转换单元1007，还配置为根据所述原始图像，得到所述目标对象中每个像素在TOF坐标系下的初始深度信息；以及通过第二预设转换模型对所述初始深度信息进行坐标系转换，得到每个像素在主摄坐标系下的第二深度信息。In the above solution, the conversion unit 1007 is further configured to obtain, according to the original image, the initial depth information of each pixel in the target object in the TOF coordinate system; and to perform coordinate system conversion on the initial depth information through a second preset conversion model to obtain the second depth information of each pixel in the main camera coordinate system.
在上述方案中,所述计算单元1006,还配置为根据预设标定算法对TOF传感器和双摄像头之间进行标定,获得标定参数;In the above solution, the computing unit 1006 is further configured to perform calibration between the TOF sensor and the dual cameras according to a preset calibration algorithm to obtain calibration parameters;
所述转换单元1007，具体配置为基于所述标定参数以及第二预设转换模型，将所述初始深度信息转换到主摄坐标系中，得到每个像素在主摄坐标系下的第二深度信息。The conversion unit 1007 is specifically configured to convert the initial depth information into the main camera coordinate system based on the calibration parameters and the second preset conversion model, to obtain the second depth information of each pixel in the main camera coordinate system.
在上述方案中，参见图10，所述终端设备100还可以包括判断单元1008，其中，In the above solution, referring to FIG. 10, the terminal device 100 may further include a judgment unit 1008, wherein,
所述确定单元1002,还配置为根据所述第一置信度,确定所述第一深度信息中的低置信度区域;The determining unit 1002 is further configured to determine a low-confidence region in the first depth information according to the first confidence;
所述计算单元1006，还配置为针对所述低置信度区域中的每个待判断像素，计算每个待判断像素对应的第一深度信息与该待判断像素的有效邻域内对应的第二深度信息之间的差值；The calculation unit 1006 is further configured to, for each pixel to be judged in the low-confidence region, calculate the difference between the first depth information corresponding to each pixel to be judged and the second depth information corresponding to the effective neighborhood of that pixel;
所述判断单元1008,配置为将所述差值与预设差值阈值进行比较;以及当所述差值大于预设差值阈值时,将该待判断像素标记为错误点,根据标记的错误点,得到所述第一深度信息中的错误数据区域。The judging unit 1008 is configured to compare the difference with a preset difference threshold; and when the difference is greater than the preset difference threshold, mark the pixel to be judged as an error point, according to the marked error point to obtain the erroneous data area in the first depth information.
在上述方案中，所述判断单元1008，还配置为当所述差值不大于预设差值阈值时，保留该待判断像素对应的第一深度信息，得到所述第一深度信息中的保留数据区域。In the above solution, the judgment unit 1008 is further configured to, when the difference is not greater than the preset difference threshold, retain the first depth information corresponding to the pixel to be judged, to obtain the retained data region in the first depth information.
在上述方案中，所述判断单元1008，还配置为若所述待判断像素的有效邻域内不存在所述第二深度信息相关联的有效像素点，则不执行所述将所述差值与预设差值阈值进行比较的步骤。In the above solution, the judgment unit 1008 is further configured to, if no valid pixel associated with the second depth information exists in the effective neighborhood of the pixel to be judged, not perform the step of comparing the difference with the preset difference threshold.
在上述方案中，所述校正单元1003，具体配置为针对所述错误数据区域中的每个错误点，通过每个错误点对应的第二深度信息对所述错误数据区域进行加权插值计算，并利用计算得到的深度信息替换所述第一深度信息中的错误数据区域，得到新的深度信息；以及根据所述主彩色图像对所述新的深度信息进行滤波处理，得到所述目标深度信息。In the above solution, the correction unit 1003 is specifically configured to, for each error point in the erroneous data region, perform a weighted interpolation calculation on the erroneous data region according to the second depth information corresponding to each error point, and replace the erroneous data region in the first depth information with the calculated depth information to obtain new depth information; and to perform filtering processing on the new depth information according to the main color image to obtain the target depth information.
在上述方案中，所述确定单元1002，还配置为根据所述原始图像，确定所述目标对象对应的第二置信度；其中，所述第二置信度用于表征所述第二深度信息的准确度；以及基于所述第二置信度，确定所述第二深度信息中的空洞个数；In the above solution, the determination unit 1002 is further configured to determine, according to the original image, a second confidence corresponding to the target object; wherein the second confidence is used to characterize the accuracy of the second depth information; and to determine, based on the second confidence, the number of holes in the second depth information;
所述判断单元1008,还配置为若所述空洞个数大于预设空洞阈值,则不执行所述的深度图像的校正方法。The judging unit 1008 is further configured to not execute the method for correcting the depth image if the number of holes is greater than a preset hole threshold.
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。It can be understood that, in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, it may also be a module, and it may also be non-modular. Moreover, each component in this embodiment may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of software function modules.
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时，可以存储在一个计算机可读取存储介质中，基于这样的理解，本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of this embodiment, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
因此,本实施例提供了一种计算机存储介质,该计算机存储介质存储有深度图像的校正程序,所述深度图像的校正程序被至少一个处理器执行时实现前述实施例中任一项所述的方法。Therefore, the present embodiment provides a computer storage medium, where the computer storage medium stores a depth image correction program, and when the depth image correction program is executed by at least one processor, the method described in any one of the foregoing embodiments is implemented.
基于上述终端设备100的组成结构以及计算机存储介质,参见图11,其示出了本申请实施例提供的终端设备100的具体硬件结构,可以包括:通信接口1101、存储器1102和处理器1103;各个组件通过总线系统1104耦合在一起。可理解,总线系统1104用于实现这些组件之间的连接通信。总线系统1104除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图11中将各种总线都标为总线系统1104。其中,通信接口1101,用于在与其他外部设备之间进行收发信息过程中,信号的接收和发送;Based on the composition of the above-mentioned terminal device 100 and the computer storage medium, refer to FIG. 11, which shows a specific hardware structure of the terminal device 100 provided by an embodiment of the present application, which may include: a communication interface 1101, a memory 1102 and a processor 1103, with the components coupled together through a bus system 1104. It can be understood that the bus system 1104 is used to implement connection and communication between these components. In addition to a data bus, the bus system 1104 also includes a power bus, a control bus and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system 1104 in FIG. 11. The communication interface 1101 is used for receiving and sending signals in the process of exchanging information with other external devices;
存储器1102,用于存储能够在处理器1103上运行的计算机程序;the memory 1102 is configured to store a computer program that can run on the processor 1103;
处理器1103,用于在运行所述计算机程序时,执行:and the processor 1103 is configured to, when running the computer program, execute the following:
获取目标对象对应的原始图像以及所述目标对象对应的主彩色图像和副彩色图像;其中,所述原始图像是根据TOF传感器对目标对象的采集得到的,所述主彩色图像和副彩色图像是根据双摄像头对目标对象的采集得到的;obtain an original image corresponding to a target object, and a main color image and a sub-color image corresponding to the target object; wherein the original image is obtained by acquiring the target object with a time-of-flight (TOF) sensor, and the main color image and the sub-color image are obtained by acquiring the target object with dual cameras;
根据所述主彩色图像和所述副彩色图像,利用预设双摄算法确定所述目标对象对应的第一深度信息和第一置信度;根据所述原始图像,确定所述目标对象对应的第二深度信息;determine, according to the main color image and the sub-color image, first depth information and a first confidence level corresponding to the target object by using a preset dual-camera algorithm; and determine, according to the original image, second depth information corresponding to the target object;
基于所述第一深度信息、所述第二深度信息以及所述第一置信度,确定所述第一深度信息中的错误数据区域;determining an erroneous data region in the first depth information based on the first depth information, the second depth information and the first confidence level;
通过所述第二深度信息以及所述主彩色图像对所述错误数据区域进行校正处理,得到目标深度信息,根据所述目标深度信息得到深度图像。Correction processing is performed on the erroneous data area by using the second depth information and the main color image to obtain target depth information, and a depth image is obtained according to the target depth information.
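The four processor steps above can be sketched end to end as follows. This is a minimal illustrative fusion under stated assumptions: the relative-difference and confidence thresholds are hypothetical parameters (the embodiment does not fix concrete values), and the erroneous region in the stereo depth is simply replaced by the TOF depth.

```python
import numpy as np

def fuse_depth(first_depth: np.ndarray,   # depth from the dual-camera (stereo) algorithm
               second_depth: np.ndarray,  # depth derived from the TOF raw image
               confidence: np.ndarray,    # per-pixel confidence of the stereo depth
               conf_thresh: float = 0.5,
               diff_thresh: float = 0.1):
    """Sketch: locate the erroneous data region in the first (stereo) depth
    information and correct it with the second (TOF) depth information,
    yielding the target depth information."""
    # A pixel is flagged as erroneous when the stereo depth deviates
    # strongly from the TOF depth AND the stereo confidence is low.
    rel_diff = np.abs(first_depth - second_depth) / np.maximum(second_depth, 1e-6)
    error_region = (rel_diff > diff_thresh) & (confidence < conf_thresh)
    # Correction: substitute the TOF depth inside the erroneous region.
    target_depth = np.where(error_region, second_depth, first_depth)
    return target_depth, error_region
```

In the embodiment the correction additionally uses the main color image (for example, to guide filtering of the corrected region); that guidance step is omitted here for brevity.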
可以理解,本申请实施例中的存储器1102可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double DataRate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本文描述的系统和方法的存储器1102旨在包括但不限于这些和任意其它适合类型的存储器。It can be understood that the memory 1102 in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 1102 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
而处理器1103可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1103中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1103可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1102,处理器1103读取存储器1102中的信息,结合其硬件完成上述的方法。The processor 1103 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 1103 or by instructions in the form of software. The above processor 1103 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 1102, and the processor 1103 reads the information in the memory 1102 and completes the above method in combination with its hardware.
可以理解的是,本文描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(ApplicationSpecific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable LogicDevice,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。It will be appreciated that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processing (DSP), Digital Signal Processing Device (DSP Device, DSPD), programmable logic Devices (Programmable Logic Device, PLD), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), general purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application or a combination thereof.
对于软件实现,可通过执行本文所述功能的模块(例如过程、函数等)来实现本文所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。For a software implementation, the techniques described herein may be implemented through modules (eg, procedures, functions, etc.) that perform the functions described herein. Software codes may be stored in memory and executed by a processor. The memory can be implemented in the processor or external to the processor.
可选地,作为另一个实施例,处理器1103还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。Optionally, as another embodiment, the processor 1103 is further configured to execute, when running the computer program, the method described in any one of the foregoing embodiments.
可选地,作为另一个实施例,终端设备100可以包括应用处理器、主摄像头、副摄像头、红外线发射器和激光发射器;其中,应用处理器可以配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。Optionally, as another embodiment, the terminal device 100 may include an application processor, a main camera, a sub-camera, an infrared emitter and a laser emitter; wherein the application processor may be configured to execute, when running the computer program, the method described in any one of the foregoing embodiments.
需要说明的是,在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that, in this application, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article or apparatus that includes the element.
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。The above-mentioned serial numbers of the embodiments of the present application are only for description, and do not represent the advantages or disadvantages of the embodiments.
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。The methods disclosed in the several method embodiments provided in this application can be arbitrarily combined under the condition of no conflict to obtain new method embodiments.
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain a new product embodiment.
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。The features disclosed in several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments or device embodiments.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can easily conceive of within the technical scope disclosed in the present application shall be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910550733.5A CN110335211B (en) | 2019-06-24 | 2019-06-24 | Depth image correction method, terminal device and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110335211A CN110335211A (en) | 2019-10-15 |
CN110335211B true CN110335211B (en) | 2021-07-30 |
Family
ID=68142681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910550733.5A Expired - Fee Related CN110335211B (en) | 2019-06-24 | 2019-06-24 | Depth image correction method, terminal device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110335211B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021087812A1 (en) * | 2019-11-06 | 2021-05-14 | Oppo广东移动通信有限公司 | Method for determining depth value of image, image processor and module |
CN114391259B (en) * | 2019-11-06 | 2024-05-31 | Oppo广东移动通信有限公司 | Information processing method, terminal device and storage medium |
CN110874852A (en) * | 2019-11-06 | 2020-03-10 | Oppo广东移动通信有限公司 | Method for determining depth image, image processor and storage medium |
CN112866674B (en) * | 2019-11-12 | 2022-10-25 | Oppo广东移动通信有限公司 | Depth map acquisition method and device, electronic equipment and computer readable storage medium |
CN112802078A (en) * | 2019-11-14 | 2021-05-14 | 北京三星通信技术研究有限公司 | Depth map generation method and device |
WO2021114061A1 (en) * | 2019-12-09 | 2021-06-17 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Electric device and method of controlling an electric device |
CN111239729B (en) * | 2020-01-17 | 2022-04-05 | 西安交通大学 | ToF depth sensor with fusion speckle and flood projection and its ranging method |
CN111325691B (en) * | 2020-02-20 | 2023-11-10 | Oppo广东移动通信有限公司 | Image correction method, apparatus, electronic device, and computer-readable storage medium |
CN111457886B (en) * | 2020-04-01 | 2022-06-21 | 北京迈格威科技有限公司 | Distance determination method, device and system |
CN111539899A (en) * | 2020-05-29 | 2020-08-14 | 深圳市商汤科技有限公司 | Image restoration method and related product |
CN113840130A (en) * | 2020-06-24 | 2021-12-24 | 中兴通讯股份有限公司 | Depth map generation method, device and storage medium |
CN111861962B (en) * | 2020-07-28 | 2021-07-30 | 湖北亿咖通科技有限公司 | Data fusion method and electronic equipment |
CN112085775B (en) * | 2020-09-17 | 2024-05-24 | 北京字节跳动网络技术有限公司 | Image processing method, device, terminal and storage medium |
CN112911091B (en) * | 2021-03-23 | 2023-02-24 | 维沃移动通信(杭州)有限公司 | Parameter adjusting method and device of multipoint laser and electronic equipment |
CN113301320B (en) * | 2021-04-07 | 2022-11-04 | 维沃移动通信(杭州)有限公司 | Image information processing method and device and electronic equipment |
CN115731281A (en) * | 2021-08-30 | 2023-03-03 | 株式会社摩如富 | Depth estimation method, depth estimation device |
CN114119696A (en) * | 2021-11-30 | 2022-03-01 | 上海商汤临港智能科技有限公司 | Method, device and system for acquiring depth image and computer readable storage medium |
CN116055852B (en) * | 2023-01-18 | 2024-12-13 | 深圳大方智能科技有限公司 | A method for selective imaging of images using a depth color camera |
CN115994937A (en) * | 2023-03-22 | 2023-04-21 | 科大讯飞股份有限公司 | Depth estimation method and device and robot |
CN116990830B (en) * | 2023-09-27 | 2023-12-29 | 锐驰激光(深圳)有限公司 | Distance positioning method and device based on binocular and TOF, electronic equipment and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609941A (en) * | 2012-01-31 | 2012-07-25 | 北京航空航天大学 | Three-dimensional registering method based on ToF (Time-of-Flight) depth camera |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Background virtualization method and device based on depth of field and electronic device |
CN109300151A (en) * | 2018-07-02 | 2019-02-01 | 浙江商汤科技开发有限公司 | Image processing method and device, electronic equipment |
CN109615652A (en) * | 2018-10-23 | 2019-04-12 | 西安交通大学 | A method and device for acquiring depth information |
CN109640066A (en) * | 2018-12-12 | 2019-04-16 | 深圳先进技术研究院 | The generation method and device of high-precision dense depth image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110335211B (en) | Depth image correction method, terminal device and computer storage medium | |
US8786679B2 (en) | Imaging device, 3D modeling data creation method, and computer-readable recording medium storing programs | |
KR101657039B1 (en) | Image processing apparatus, image processing method, and imaging system | |
CN107948519B (en) | Image processing method, device and equipment | |
JP6946188B2 (en) | Methods and equipment for multi-technology depth map acquisition and fusion | |
CN110390719B (en) | Reconstruction equipment based on flight time point cloud | |
CN108028887B (en) | Photographing focusing method, device and equipment for terminal | |
CN108702437B (en) | Method, system, device and storage medium for calculating depth map | |
CN107945105B (en) | Background blur processing method, device and equipment | |
CN109640066B (en) | Method and device for generating high-precision dense depth image | |
WO2020038255A1 (en) | Image processing method, electronic apparatus, and computer-readable storage medium | |
TWI709110B (en) | Camera calibration method and apparatus, electronic device | |
KR20160090373A (en) | Photographing method for dual-camera device and dual-camera device | |
WO2019105261A1 (en) | Background blurring method and apparatus, and device | |
CN110336942B (en) | Blurred image acquisition method, terminal and computer-readable storage medium | |
JP6452360B2 (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
US11523056B2 (en) | Panoramic photographing method and device, camera and mobile terminal | |
CN112004029B (en) | Exposure processing method, exposure processing device, electronic apparatus, and computer-readable storage medium | |
CN109040745A (en) | Camera self-calibration method and device, electronic equipment and computer storage medium | |
CN111882655A (en) | Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction | |
CN106934828A (en) | Depth image processing method and depth image processing system | |
CN109040746A (en) | Camera calibration method and apparatus, electronic equipment, computer readable storage medium | |
CN109257540B (en) | Photographing correction method of multi-photographing lens group and photographing device | |
CN108322726A (en) | A kind of Atomatic focusing method based on dual camera | |
JP5796611B2 (en) | Image processing apparatus, image processing method, program, and imaging system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 20210730 |