CN106780588A - A kind of image depth estimation method based on sparse laser observations - Google Patents
A kind of image depth estimation method based on sparse laser observations
- Publication number
- CN106780588A (Application CN201611126056.7A)
- Authority
- CN
- China
- Prior art keywords
- depth
- map
- laser
- depth map
- estimation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses an image depth estimation method based on sparse laser observations. The method uses the sparse observations of a single-line or multi-line laser to achieve dense depth reconstruction from a monocular image. A deep neural network is trained by constructing a reference depth map and a residual depth map, which makes full use of the sparse partial depth observations. Compared with methods that estimate depth from the monocular image alone, the proposed method shows clear advantages.
Description
Technical Field
The invention relates to the field of scene depth estimation, and in particular to a dense scene depth estimation method based on monocular images and sparse laser observations.
Background Art
Drawing on rich experience and continual learning, humans can estimate the distance of objects from a single monocular image, i.e., they possess a degree of depth estimation ability. In recent years, machine learning methods have made remarkable progress in imitating this ability, with data-driven deep learning techniques standing out in particular. These techniques avoid hand-crafted feature design: they learn features from the raw monocular RGB image and output a prediction of the corresponding depth map.
Eigen et al. first proposed deep-learning-based monocular depth estimation, constructing a two-stage network whose first stage produces a coarse estimate and whose second stage refines it. They subsequently extended this work to jointly estimate scene depth, surface normals, and scene semantics, and verified that jointly estimating normals and semantics helps improve depth estimation performance. Liu et al. explored combining deep learning with conditional random fields (CRFs), segmenting the image into superpixels and optimizing a CRF constructed over all superpixels. Li and Wang each extended this approach, refining from the superpixel level to the pixel level through hierarchical CRFs.
Although these methods demonstrate that depth can be estimated from monocular images, a monocular image by itself lacks scale information. Eigen et al. also noted that depth estimates based on a monocular image may carry a global bias.
Summary of the Invention
The purpose of the present invention is to estimate a dense image depth map by incorporating sparse single-line laser information, so as to reduce the global bias of scene depth estimation and obtain more reliable scene depth estimates.
To achieve this purpose, the present invention uses a deep learning method that takes a monocular image and a sparse single-line laser scan as input, learns features autonomously, and produces a dense depth estimate. The specific steps of the training process are as follows:
An image depth estimation method based on sparse laser observations, characterized in that it comprises the following steps:
Step 1: to densify the sparse single-line laser information (the sparse laser may be a single-line or a multi-line laser; here the single-line laser is used to construct the reference depth map and the residual depth map), each laser point of the single-line scan is stretched in three-dimensional space in the direction perpendicular to the ground, yielding a reference depth plane perpendicular to the ground. Using the calibration between the monocular camera and the single-line laser, the reference depth plane is projected onto the image plane of the monocular camera, yielding a reference depth map corresponding to the image. Subtracting the reference depth map from the absolute depth map acquired by a depth sensor yields the residual depth map.
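The construction of the reference depth map can be sketched as follows. This is a minimal illustration, assuming the laser points are already expressed in the camera frame (otherwise the camera-laser extrinsics would be applied first) and that the ground is perpendicular to the image columns, so stretching a point vertically amounts to filling the image column it projects to; the function and parameter names are ours, not the patent's.

```python
import numpy as np

def reference_depth_map(laser_xyz, K, image_shape):
    """Densify a single-line laser scan into a reference depth map.

    laser_xyz: (N, 3) laser points in the camera frame (assumption).
    K: 3x3 camera intrinsic matrix.
    image_shape: (height, width) of the monocular image.
    """
    h, w = image_shape
    ref = np.full((h, w), np.nan)
    pts = laser_xyz[laser_xyz[:, 2] > 0]        # keep points in front of camera
    uvw = K @ pts.T                             # project to homogeneous pixels
    u = np.round(uvw[0] / uvw[2]).astype(int)   # image column of each point
    inside = (u >= 0) & (u < w)
    # Stretching perpendicular to the ground: one depth value per column.
    ref[:, u[inside]] = pts[inside, 2]
    # Fill columns without a laser hit by horizontal interpolation.
    cols = np.arange(w)
    known = ~np.isnan(ref[0])
    if known.any():
        ref = np.tile(np.interp(cols, cols[known], ref[0, known]), (h, 1))
    return ref
```

The residual depth map is then simply the sensor depth minus `reference_depth_map(...)` at pixels where the sensor provides a measurement.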
Step 2: using the monocular image acquired by the monocular camera and the reference depth map obtained in Step 1 as training data, train a convolutional neural network to estimate the corresponding residual depth map.
步骤三,将卷机神经网络估计的残差深度图与参考深度图相加,得到估计的绝对深度图,称为绝对深度估计图,并在此基础上进一步构造优化的卷机神经网络,缩小该绝对深度估计图与深度传感器获得的绝对深度图之间的差异;该优化的卷机神经网络与步骤二所述用于估计残差深度的卷机神经网络可以叠加在一起,进行端到端优化,即输入单目图像与参考深度图,输出得到经过优化的绝对深度估计图。Step 3: Add the residual depth map estimated by the convolution neural network to the reference depth map to obtain the estimated absolute depth map, which is called the absolute depth estimation map, and further construct the optimized convolution neural network on this basis to shrink The difference between the absolute depth estimation map and the absolute depth map obtained by the depth sensor; the optimized convolutional neural network and the convolutional neural network described in step 2 for estimating the residual depth can be superimposed together for end-to-end Optimization, that is, input a monocular image and a reference depth map, and output an optimized absolute depth estimation map.
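The stacking of Steps 2 and 3 can be sketched as the following forward pass. This is a hedged sketch: the two sub-networks are taken as given modules, and their interfaces, the channel-wise concatenation, and the L1 losses are our assumptions rather than the patent's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDepthPipeline(nn.Module):
    """End-to-end composition of a residual estimator and a refinement net."""

    def __init__(self, residual_net: nn.Module, refine_net: nn.Module):
        super().__init__()
        self.residual_net = residual_net   # Step 2: predicts residual depth
        self.refine_net = refine_net       # Step 3: polishes the sum

    def forward(self, image, ref_depth):
        x = torch.cat([image, ref_depth], dim=1)       # RGB + reference depth
        residual = self.residual_net(x)                # residual depth map
        coarse = residual + ref_depth                  # absolute depth estimate
        refined = self.refine_net(torch.cat([coarse, image], dim=1))
        return refined, coarse

def training_loss(refined, coarse, gt_depth):
    # Supervise both the coarse and refined absolute estimates (assumption).
    return F.l1_loss(refined, gt_depth) + F.l1_loss(coarse, gt_depth)
```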
Based on the above technical solution, the present invention may further adopt the following technical measures:
The absolute depth estimation map produced end-to-end by the deep neural network is fused with the sparse laser depth map through a conditional random field, which ensures that, at positions of the absolute depth estimation map where a single-line laser observation exists, the depth value is consistent with the laser measurement.
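As a stand-in for the CRF fusion, the following sketch iteratively smooths the network estimate while anchoring laser-observed pixels to their measurements. A full CRF would use image-driven pairwise weights; the uniform 4-neighbour smoothing and the weight `lam` here are our simplifying assumptions.

```python
import numpy as np

def fuse_with_laser(depth_est, laser_depth, laser_mask, n_iters=100, lam=10.0):
    """Pull laser-observed pixels toward their measurements while keeping
    the result spatially smooth (a crude surrogate for the CRF fusion)."""
    d = depth_est.copy()
    for _ in range(n_iters):
        padded = np.pad(d, 1, mode="edge")
        # mean of the four neighbours of every pixel
        nb = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
              padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        d = nb
        # unary term: laser pixels are weighted strongly toward the measurement
        d[laser_mask] = (lam * laser_depth[laser_mask] + nb[laser_mask]) / (lam + 1.0)
    return d
```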
In Step 2, the convolutional neural network is trained to estimate the corresponding residual depth map as follows: the residual depth value of each pixel of the residual depth map to be fitted is discretized into several natural-number values, so that the estimation of the residual depth is carried out as a classification problem.
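A minimal sketch of this discretization is given below; the residual range and the number of bins are illustrative assumptions, since the patent only states that the values are discretized to natural numbers.

```python
import numpy as np

def discretize_residual(residual, d_min=-3.0, d_max=3.0, n_bins=64):
    """Map per-pixel residual depth (metres) to class labels 0 .. n_bins-1."""
    clipped = np.clip(residual, d_min, d_max)
    labels = np.round((clipped - d_min) / (d_max - d_min) * (n_bins - 1))
    return labels.astype(np.int64)

def undiscretize(labels, d_min=-3.0, d_max=3.0, n_bins=64):
    """Inverse mapping: class label back to a residual depth value."""
    return labels.astype(np.float64) / (n_bins - 1) * (d_max - d_min) + d_min
```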
By adopting the technical solution of the present invention, the beneficial effects are as follows: the invention can incorporate partial sparse true depth observations, such as those of a single-line lidar, to obtain more accurate depth estimates; it reduces the global bias of scene depth estimation and yields more reliable scene depth estimates.
Description of the Drawings
Figure 1a: input monocular image;
Figure 1b: example of the depth image to be estimated;
Figure 2a: sparse laser observations;
Figure 2b: reference depth map;
Figure 2c: example residual depth map;
Figure 3a: ground-truth depth image;
Figure 3b: depth estimate before refinement;
Figure 3c: depth estimate after refinement.
Detailed Description
For a better understanding of the technical solution of the present invention, it is further described below with reference to the accompanying drawings. Figure 1 shows an example of depth estimation: the input is the monocular image shown in Figure 1a, and the task is to estimate the scene depth shown in Figure 1b.
Step 1: construct the reference depth map and the residual depth map from the single-line laser. Figure 2a shows the single-line laser information available for the scene of Figure 1; it is clearly very sparse and limited. To densify it, each laser point is stretched in three-dimensional space in the direction perpendicular to the ground, yielding a reference depth plane perpendicular to the ground. Using the calibration between the monocular camera and the single-line laser, the reference depth plane is projected onto the image, yielding a dense reference depth map corresponding to the image, as shown in Figure 2b. Subtracting the reference depth map from the ground-truth depth map yields the residual depth map, as shown in Figure 2c.
Step 2: based on deep learning, fit the residual depth map with the monocular image and the reference depth map as input. The residual depth value of each pixel is discretized into several integer values, so that depth estimation is carried out as classification. A fully convolutional deep neural network is constructed to estimate the depth class of every pixel. To obtain better fitting performance and larger capacity, the 50-layer Deep Residual Network proposed by He et al. is adopted, initialized with weights pre-trained on ImageNet.
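A sketch of such a fully convolutional classifier is shown below, assuming PyTorch/torchvision. The 4-channel stem (RGB plus reference depth), the 1x1 classification head, and the bilinear upsampling are our assumptions; the patent only specifies a fully convolutional 50-layer residual network pre-trained on ImageNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class ResidualDepthClassifier(nn.Module):
    """ResNet-50 made fully convolutional, predicting one of n_bins
    residual-depth classes per pixel."""

    def __init__(self, n_bins=64):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")  # ImageNet init
        # Widen the stem from 3 to 4 input channels (RGB + reference depth),
        # copying the pretrained RGB filters into the first three channels.
        old = backbone.conv1
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        with torch.no_grad():
            backbone.conv1.weight[:, :3] = old.weight
        # Drop global pooling and the fc layer to keep a spatial feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Conv2d(2048, n_bins, kernel_size=1)

    def forward(self, rgb, ref_depth):
        x = torch.cat([rgb, ref_depth], dim=1)
        logits = self.head(self.features(x))          # stride-32 logits
        return F.interpolate(logits, size=rgb.shape[-2:],
                             mode="bilinear", align_corners=False)
```

Training would use a per-pixel cross-entropy loss against the discretized residual labels described above.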
Step 3: add the residual depth map estimated by the network to the reference depth map to obtain the estimated true depth map, and on this basis further construct a refinement network that reduces the difference between the estimated true depth map and the actual ground-truth depth map. The refinement network can be stacked with the residual estimation network and optimized end-to-end. Figure 3 compares the ground-truth depth, the depth estimate before refinement, and the depth estimate after refinement.
The effectiveness of the method is verified through experiments on the NYUD2 dataset. NYUD2 is an indoor RGB-D dataset; single-line laser data are simulated from its RGB-D data. The main advantage of the present invention lies in the generation and estimation of the reference depth map and the residual depth map. Therefore, the experiments compare, under the same neural network architecture: using only RGB as input to estimate the true depth (Scheme 1); using RGB and the reference depth map as input to estimate the true depth (Scheme 2); and using RGB and the reference depth map to estimate the residual depth map and from it the true depth map (Scheme 3). The results are also compared with the current world-leading monocular depth estimation methods; the comparison is shown in Table 1.
To evaluate the depth estimates comprehensively, Table 1 uses six metrics. Let $\hat{y}$ denote the estimated depth of a pixel, $y$ its true depth, and $T$ the set of all pixels. The six metrics are:
1. Absolute relative error (rel): $\frac{1}{|T|}\sum_{y \in T} \frac{|\hat{y} - y|}{y}$;
2. Mean log error (log10): $\frac{1}{|T|}\sum_{y \in T} \left|\log_{10}\hat{y} - \log_{10} y\right|$;
3. Root-mean-square error (rms): $\sqrt{\frac{1}{|T|}\sum_{y \in T} (\hat{y} - y)^2}$;
4. Three threshold accuracies ($\delta_i$): the proportion of pixels satisfying $\max\left(\frac{\hat{y}}{y}, \frac{y}{\hat{y}}\right) < 1.25^i$ for $i = 1, 2, 3$.
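These six metrics can be computed in a few lines of NumPy, sketched below; restricting the evaluation to pixels with a valid (positive) ground-truth depth is our assumption.

```python
import numpy as np

def depth_metrics(pred, gt):
    """The six evaluation metrics of Table 1 over valid ground-truth pixels."""
    mask = gt > 0
    p, g = pred[mask], gt[mask]
    ratio = np.maximum(p / g, g / p)
    return {
        "rel": float(np.mean(np.abs(p - g) / g)),
        "log10": float(np.mean(np.abs(np.log10(p) - np.log10(g)))),
        "rms": float(np.sqrt(np.mean((p - g) ** 2))),
        "delta1": float(np.mean(ratio < 1.25)),
        "delta2": float(np.mean(ratio < 1.25 ** 2)),
        "delta3": float(np.mean(ratio < 1.25 ** 3)),
    }
```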
The results in Table 1 show that directly feeding the densified laser as an additional input already improves depth estimation to a certain extent, and that residual estimation followed by refinement improves it further. Compared with the other world-leading monocular depth estimation algorithms, the proposed method shows clear advantages on all metrics.
Table 1. Depth estimation comparison on the NYUD2 dataset.
The above embodiment is an illustration of the present invention, not a limitation of it; any solution obtained by simple transformation of the present invention falls within its protection scope.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611126056.7A CN106780588A (en) | 2016-12-09 | 2016-12-09 | A kind of image depth estimation method based on sparse laser observations |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611126056.7A CN106780588A (en) | 2016-12-09 | 2016-12-09 | A kind of image depth estimation method based on sparse laser observations |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106780588A true CN106780588A (en) | 2017-05-31 |
Family
ID=58877585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611126056.7A Pending CN106780588A (en) | 2016-12-09 | 2016-12-09 | A kind of image depth estimation method based on sparse laser observations |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780588A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103247075A (en) * | 2013-05-13 | 2013-08-14 | 北京工业大学 | Variational mechanism-based indoor scene three-dimensional reconstruction method |
CN104346608A (en) * | 2013-07-26 | 2015-02-11 | 株式会社理光 | Sparse depth map densing method and device |
CN106157307A (en) * | 2016-06-27 | 2016-11-23 | 浙江工商大学 | A kind of monocular image depth estimation method based on multiple dimensioned CNN and continuous CRF |
Non-Patent Citations (2)
Title |
---|
Fayao Liu et al.: "Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Yiyi Liao et al.: "Parse Geometry from a Line: Monocular Depth Estimation with Partial Laser Observation", arXiv:1611.02174v1 [cs.CV] *
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107992848B (en) * | 2017-12-19 | 2020-09-25 | 北京小米移动软件有限公司 | Method and device for acquiring depth image and computer readable storage medium |
CN107992848A (en) * | 2017-12-19 | 2018-05-04 | 北京小米移动软件有限公司 | Obtain the method, apparatus and computer-readable recording medium of depth image |
CN108416840B (en) * | 2018-03-14 | 2020-02-18 | 大连理工大学 | A 3D scene dense reconstruction method based on monocular camera |
CN108416840A (en) * | 2018-03-14 | 2018-08-17 | 大连理工大学 | A Dense Reconstruction Method of 3D Scene Based on Monocular Camera |
CN108510535A (en) * | 2018-03-14 | 2018-09-07 | 大连理工大学 | A High-Quality Depth Estimation Method Based on Depth Prediction and Enhanced Subnetwork |
CN108510535B (en) * | 2018-03-14 | 2020-04-24 | 大连理工大学 | High-quality depth estimation method based on depth prediction and enhancer network |
US10810754B2 (en) | 2018-04-24 | 2020-10-20 | Ford Global Technologies, Llc | Simultaneous localization and mapping constraints in generative adversarial networks for monocular depth estimation |
CN108489496A (en) * | 2018-04-28 | 2018-09-04 | 北京空间飞行器总体设计部 | Noncooperative target Relative Navigation method for estimating based on Multi-source Information Fusion and system |
CN109300151A (en) * | 2018-07-02 | 2019-02-01 | 浙江商汤科技开发有限公司 | Image processing method and device, electronic equipment |
CN109300151B (en) * | 2018-07-02 | 2021-02-12 | 浙江商汤科技开发有限公司 | Image processing method and device and electronic equipment |
CN109087349A (en) * | 2018-07-18 | 2018-12-25 | 亮风台(上海)信息科技有限公司 | A kind of monocular depth estimation method, device, terminal and storage medium |
CN109087349B (en) * | 2018-07-18 | 2021-01-26 | 亮风台(上海)信息科技有限公司 | A monocular depth estimation method, device, terminal and storage medium |
CN109325972B (en) * | 2018-07-25 | 2020-10-27 | 深圳市商汤科技有限公司 | Laser radar sparse depth map processing method, device, equipment and medium |
CN109325972A (en) * | 2018-07-25 | 2019-02-12 | 深圳市商汤科技有限公司 | Processing method, device, equipment and the medium of laser radar sparse depth figure |
TWI766175B (en) * | 2018-07-27 | 2022-06-01 | 大陸商深圳市商湯科技有限公司 | Method, device and apparatus for monocular image depth estimation, program and storage medium thereof |
JP2021500689A (en) * | 2018-07-27 | 2021-01-07 | 深▲せん▼市商▲湯▼科技有限公司Shenzhen Sensetime Technology Co., Ltd. | Monocular image depth estimation method and equipment, equipment, programs and storage media |
CN109035319B (en) * | 2018-07-27 | 2021-04-30 | 深圳市商汤科技有限公司 | Monocular image depth estimation method, monocular image depth estimation device, monocular image depth estimation apparatus, monocular image depth estimation program, and storage medium |
KR20200044108A (en) * | 2018-07-27 | 2020-04-28 | 선전 센스타임 테크놀로지 컴퍼니 리미티드 | Method and apparatus for estimating monocular image depth, device, program and storage medium |
KR102292559B1 (en) * | 2018-07-27 | 2021-08-24 | 선전 센스타임 테크놀로지 컴퍼니 리미티드 | Monocular image depth estimation method and apparatus, apparatus, program and storage medium |
WO2020019761A1 (en) * | 2018-07-27 | 2020-01-30 | 深圳市商汤科技有限公司 | Monocular image depth estimation method and apparatus, device, program and storage medium |
CN109035319A (en) * | 2018-07-27 | 2018-12-18 | 深圳市商汤科技有限公司 | Monocular image depth estimation method and device, equipment, program and storage medium |
US11443445B2 (en) | 2018-07-27 | 2022-09-13 | Shenzhen Sensetime Technology Co., Ltd. | Method and apparatus for depth estimation of monocular image, and storage medium |
CN109461178A (en) * | 2018-09-10 | 2019-03-12 | 中国科学院自动化研究所 | A kind of monocular image depth estimation method and device merging sparse known label |
CN109146944A (en) * | 2018-10-30 | 2019-01-04 | 浙江科技学院 | A kind of space or depth perception estimation method based on the revoluble long-pending neural network of depth |
CN109146944B (en) * | 2018-10-30 | 2020-06-26 | 浙江科技学院 | Visual depth estimation method based on depth separable convolutional neural network |
CN110232361A (en) * | 2019-06-18 | 2019-09-13 | 中国科学院合肥物质科学研究院 | Human body behavior intension recognizing method and system based on the dense network of three-dimensional residual error |
CN110232361B (en) * | 2019-06-18 | 2021-04-02 | 中国科学院合肥物质科学研究院 | Human behavior intention identification method and system based on three-dimensional residual dense network |
CN110428462B (en) * | 2019-07-17 | 2022-04-08 | 清华大学 | Multi-camera stereo matching method and device |
CN110428462A (en) * | 2019-07-17 | 2019-11-08 | 清华大学 | Polyphaser solid matching method and device |
CN114600151A (en) * | 2019-10-24 | 2022-06-07 | 华为技术有限公司 | Domain adaptation for deep densification |
CN113034562A (en) * | 2019-12-09 | 2021-06-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for optimizing depth information |
CN113034562B (en) * | 2019-12-09 | 2023-05-12 | 百度在线网络技术(北京)有限公司 | Method and apparatus for optimizing depth information |
CN111062981A (en) * | 2019-12-13 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN111062981B (en) * | 2019-12-13 | 2023-05-05 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN110992271B (en) * | 2020-03-04 | 2020-07-07 | 腾讯科技(深圳)有限公司 | Image processing method, path planning method, device, equipment and storage medium |
CN110992271A (en) * | 2020-03-04 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Image processing method, path planning method, device, equipment and storage medium |
CN111680554A (en) * | 2020-04-29 | 2020-09-18 | 北京三快在线科技有限公司 | Depth estimation method and device for automatic driving scene and autonomous vehicle |
CN112712017A (en) * | 2020-12-29 | 2021-04-27 | 上海智蕙林医疗科技有限公司 | Robot, monocular depth estimation method and system and storage medium |
CN113219475A (en) * | 2021-07-06 | 2021-08-06 | 北京理工大学 | Method and system for correcting monocular distance measurement by using single line laser radar |
CN113219475B (en) * | 2021-07-06 | 2021-10-22 | 北京理工大学 | Method and system for correcting monocular ranging by using single-line lidar |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780588A (en) | A kind of image depth estimation method based on sparse laser observations | |
CN109784333B (en) | Three-dimensional target detection method and system based on point cloud weighted channel characteristics | |
US12033081B2 (en) | Systems and methods for virtual and augmented reality | |
Schneider et al. | Semantic stixels: Depth is not enough | |
CN108596974B (en) | Dynamic scene robot positioning and mapping system and method | |
CN109064514B (en) | A 6-DOF Pose Estimation Method Based on Projected Point Coordinate Regression | |
CN106600583B (en) | Parallax picture capturing method based on end-to-end neural network | |
CN110637305A (en) | Learn to reconstruct 3D shapes by rendering many 3D views | |
Yang et al. | MPED: Quantifying point cloud distortion based on multiscale potential energy discrepancy | |
CN107103285B (en) | Face depth prediction method based on convolutional neural network | |
CN110728707B (en) | Multi-view depth prediction method based on asymmetric deep convolutional neural network | |
CN111489394B (en) | Object attitude estimation model training method, system, device and medium | |
US11948310B2 (en) | Systems and methods for jointly training a machine-learning-based monocular optical flow, depth, and scene flow estimator | |
CN109005398B (en) | A Disparity Matching Method of Stereo Image Based on Convolutional Neural Network | |
Mascaro et al. | Diffuser: Multi-view 2d-to-3d label diffusion for semantic scene segmentation | |
JP6946255B2 (en) | Learning device, estimation device, learning method and program | |
CN110335299B (en) | An Implementation Method of Monocular Depth Estimation System Based on Adversarial Network | |
CN104517317A (en) | Three-dimensional reconstruction method of vehicle-borne infrared images | |
CN111354030A (en) | Method for generating unsupervised monocular image depth map embedded into SENET unit | |
CN104182968A (en) | Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system | |
CN114445480A (en) | Transformer-based thermal infrared image stereo matching method and device | |
Hirner et al. | FC-DCNN: A densely connected neural network for stereo estimation | |
CN107274448B (en) | Variable weight cost aggregation stereo matching algorithm based on horizontal tree structure | |
CN109816710B (en) | Parallax calculation method for binocular vision system with high precision and no smear | |
CN111476075A (en) | Object detection method and device based on CNN (convolutional neural network) by utilizing 1x1 convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170531