
CN112150526A - A depth estimation method for light field images based on deep learning - Google Patents

A depth estimation method for light field images based on deep learning

Info

Publication number
CN112150526A
CN112150526A
Authority
CN
China
Prior art keywords
image
light field
depth
depth map
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010733319.0A
Other languages
Chinese (zh)
Inventor
郑臻荣 (Zheng Zhenrong)
王旭成 (Wang Xucheng)
陶骁 (Tao Xiao)
陶陈凝 (Tao Chenning)
吴仍茂 (Wu Rengmao)
孙鹏 (Sun Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010733319.0A
Publication of CN112150526A
Legal status: Pending


Classifications

    • G06T 7/50 Image analysis; Depth or shape recovery
    • G06N 3/045 Neural networks; Combinations of networks
    • G06N 3/08 Neural networks; Learning methods
    • G06T 5/70 Image enhancement or restoration; Denoising; Smoothing
    • G06T 7/13 Segmentation; Edge detection
    • G06T 2207/10052 Images from lightfield camera
    • G06T 2207/20032 Filtering details; Median filtering
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a light field image depth estimation method based on deep learning, comprising the following steps: decoding and reconstructing the light field source file according to the parameter information of the light field camera, and extracting a sub-aperture image array; inputting the sub-aperture images into a trained neural network to compute an estimated depth map; and filtering and optimizing the estimated depth map to obtain the final estimated depth map. On the basis of a neural network, the invention combines polar plane image analysis with image segmentation, exploiting both depth features and image edge information. This alleviates the mismatching problem in depth estimation of real light field images and enables fast and accurate depth estimation for both synthetic and real light field images.

Description

Light field image depth estimation method based on deep learning
Technical Field
The invention relates to the technical field of computer vision and digital image processing, in particular to a light field image depth estimation method based on deep learning.
Background
In recent years, with the development of computational light field imaging, light field cameras have entered the market as light field acquisition devices. Building on the traditional camera model, a light field camera inserts a microlens array between the main lens and the sensor. This special structure allows the camera to record both the position and the angle of every ray reaching the imaging plane in a single exposure, enabling applications such as depth estimation and scene refocusing in subsequent processing.
Several light-field-based depth estimation methods have been proposed and achieve good results. They fall into three main categories: methods based on sub-aperture image matching, methods based on polar plane images (i.e., epipolar plane images, EPIs), and methods based on deep learning. For example, the depth estimation method of publication CN108596965A addresses the occlusion problem by using depth estimation guided by light field structure characteristics to compute a depth map for the color image of the central viewpoint; it uses the gradient information of the depth map as the smoothing term of an energy function to optimize the global depth within a Markov random field framework, and computes the parallax between the central viewpoint and the other viewpoints at the same horizontal position by multi-scale, multi-window stereo matching. However, because of the huge volume of light field data, such methods face an unavoidable trade-off between computation time and accuracy. The polar plane image method of publication CN107545586A obtains the depth of a region from the slope of the corresponding straight line in a polar plane image derived from the light field data. Owing to the geometric characteristics of polar plane images, however, this type of method has limitations in scenes containing occluded or reflective regions.
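By way of illustration, the geometric relation exploited by such slope-based methods can be sketched in a few lines of Python; every numeric value below (focal length, baseline, pixel pitch, measured slope) is a made-up assumption for the example, not a parameter from CN107545586A.

```python
# Sketch of the slope-to-depth relation behind polar plane image methods:
# a scene point traces a straight line across the stacked views, and the
# line's slope (pixel shift per view step) is its disparity.
f = 0.02               # main lens focal length in meters (assumed)
b = 1.0e-4             # baseline between adjacent sub-aperture views in meters (assumed)
pixel_pitch = 1.4e-5   # sensor pixel pitch in meters per pixel (assumed)

slope = 0.5                       # pixels of shift per view step, read off the EPI line
disparity = slope * pixel_pitch   # shift on the sensor in meters per view step
Z = f * b / disparity             # triangulation: depth of the scene point
print(f"estimated depth: {Z:.3f} m")   # about 0.286 m for these made-up values
```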
With the development of deep learning, convolutional neural networks have begun to be applied to the depth estimation of light field images. Most existing methods are trained and tested on synthetic light field datasets, on which they perform well. However, real light field images captured by a light field camera have a narrower baseline and contain substantial noise, so these methods perform poorly when applied to them, which greatly restricts the practical applicability of light field depth estimation.
Disclosure of Invention
The invention provides a light field image depth estimation method based on deep learning. It combines polar plane images with an image segmentation network in a neural network model designed for light field image depth estimation, and achieves fast and accurate depth estimation for both synthetic and real light field images.
To achieve the purpose of the invention, the following technical scheme is adopted:
a light field image depth estimation method based on deep learning comprises the following steps:
(1) decoding and reconstructing the light field source file according to the parameter information of the light field camera, and extracting a sub-aperture image array;
(2) inputting the sub-aperture images into a trained neural network for calculation to obtain a secondarily estimated depth map;
the neural network comprises:
a polar plane image portion for extracting an initially estimated depth map from the sub-aperture images;
an image segmentation portion for extracting edge information of the image from the sub-aperture images;
a cascade portion for performing convolution on the initially estimated depth map together with the edge information to obtain the secondarily estimated depth map;
(3) performing median filtering on the secondarily estimated depth map to remove part of the noise and obtain the final estimated depth map.
Optionally, in step (1), the parameter information of the light field camera is acquired by processing a white image captured by the camera;
the light field source file is then decoded, and the required sub-aperture image array is obtained after filtering and color correction.
Several optional features are provided below, not as additional limitations on the above general solution but as further additions or preferences; in the absence of technical or logical contradiction, each optional feature may be combined with the general solution on its own or together with other optional features.
Optionally, the sub-aperture images are adjusted to a square shape before being input into the neural network.
Optionally, the polar plane image portion is composed of a multi-stream network and a merging network.
Optionally, the input of the multi-stream network is a 9 × 9 array of sub-aperture images centered on the central view; polar plane images in the four directions 0°, 45°, 90° and 135° are extracted from it and convolved separately by the defined convolution modules to extract the depth features of the scene.
Optionally, the merging network is connected to the outputs of the multi-stream network and performs convolution on the merged features to compute the relationship between the depth features of polar plane images in different directions, yielding the initially estimated depth map.
Optionally, the polar plane image portion and the cascade portion use small 3 × 3 convolution kernels with a stride of 1; 'same' padding is used in the convolutions so that the output depth map has the same size as the input sub-aperture images.
Optionally, the input of the image segmentation portion is the central sub-aperture image, and convolution, pooling and deconvolution layers are used to extract the edge information of the image.
Optionally, the neural network uses a light field dataset with ground-truth depth maps as the training set and is trained on randomly sampled grayscale patches, with the mean absolute error as the loss function, defined as follows:
$$L(W, b) = \frac{1}{T} \sum_{t=1}^{T} \left| H(g_t; W, b) - d_t \right|$$
wherein L is the loss function, W is the weight matrix, b is the bias, T is the number of training patches, H is the forward propagation function of the network, g is the input 9 × 9 light field sub-aperture image array, and d is the corresponding grayscale patch of the ground-truth depth map;
the value of the loss function is reduced through iterative training, shrinking the gray-value difference between the final estimated depth map and the ground-truth depth map, until training is judged to have saturated; training then ends and the trained neural network parameters are saved.
Optionally, the neural network augments the training set before training, including rotation, flipping, gamma transformation and the addition of random noise, to avoid overfitting and improve the generalization ability of the network.
The invention has the advantage that depth estimation can be carried out quickly and accurately on light field images to obtain a high-precision depth map. The neural network obtains the depth features of the scene through the polar plane image portion and combines them with the image edge information obtained by the image segmentation portion, which alleviates the mismatching problem in depth estimation of real light field images, reduces bad pixels in the estimated depth map, and improves the accuracy of depth estimation. The whole pipeline exploits the high computing power of the GPU, running far faster than traditional algorithms and meeting the requirements of practical applications.
Drawings
FIG. 1 is a flow chart of a light field image depth estimation method based on deep learning according to the present invention;
FIG. 2 is a schematic diagram of the structure of a neural network according to the present invention;
FIG. 3 is a schematic diagram of the structure of the convolution module of the present invention;
FIG. 4 is a schematic diagram of the structure of a pooling module of the present invention;
FIG. 5 is a schematic diagram illustrating an example comparison of the depth estimation results of the invention with prior art methods.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments disclosed below.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Fig. 1 is a schematic flow chart of a light field image depth estimation method based on deep learning in this embodiment, and includes the following steps:
step 1, decoding the reconstructed light field image according to the parameter information of the light field camera, and extracting a sub-aperture image array.
As described in step 1, the raw light field image captured by a light field camera (e.g., a Lytro camera) is typically a 12-bit Bayer-format image, which needs to be decoded into sub-aperture images for the subsequent depth estimation process. The decoding process can use a light field toolbox to process a white image shot by the camera to obtain the parameter information of the light field camera; the light field source file is then decoded, and the required sub-aperture image array is obtained after filtering and color correction.
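As an illustration of this extraction step, the following sketch rearranges an idealized, already demosaiced and rectified lenslet image into a sub-aperture image array; a real Lytro decode additionally requires the white-image calibration, filtering and color correction described above, and the 9 × 9 angular resolution and spatial size here are assumptions.

```python
import numpy as np

def lenslet_to_subapertures(raw: np.ndarray, U: int = 9, V: int = 9) -> np.ndarray:
    """raw: (H*U, W*V, 3) rectified lenslet image -> (U, V, H, W, 3) view array."""
    HU, WV, C = raw.shape
    H, W = HU // U, WV // V
    views = np.empty((U, V, H, W, C), dtype=raw.dtype)
    for u in range(U):
        for v in range(V):
            # pixel (u, v) under every microlens belongs to the same view
            views[u, v] = raw[u::U, v::V]
    return views

raw = np.random.rand(9 * 64, 9 * 64, 3).astype(np.float32)  # toy stand-in for a decode
views = lenslet_to_subapertures(raw)
print(views.shape)        # (9, 9, 64, 64, 3)
center = views[4, 4]      # central sub-aperture image
```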
Step 2, inputting the sub-aperture images into the trained neural network for calculation to obtain the secondarily estimated depth map.
As described in step 2, owing to the special structure of the microlenses in the light field camera, the resulting sub-aperture images are generally rectangular, with unequal height and width. To allow polar plane images to be extracted in all four directions in subsequent operations, the sub-aperture images are adjusted to a square shape before being input into the neural network.
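A minimal sketch of this adjustment, assuming a (U, V, H, W, C) view array and a simple center crop (the patent does not specify the exact resizing rule):

```python
import numpy as np

def crop_square(views: np.ndarray) -> np.ndarray:
    """Center-crop every view of a (U, V, H, W, C) array to S x S, S = min(H, W)."""
    U, V, H, W, C = views.shape
    S = min(H, W)
    y0, x0 = (H - S) // 2, (W - S) // 2
    return views[:, :, y0:y0 + S, x0:x0 + S, :]

views = np.zeros((9, 9, 48, 64, 3), dtype=np.float32)  # toy rectangular views
print(crop_square(views).shape)                        # (9, 9, 48, 48, 3)
```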
Step 3, performing median filtering on the secondarily estimated depth map to remove part of the noise and obtain the final estimated depth map.
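This filtering step can be sketched with SciPy's median filter; the 3 × 3 window is an illustrative assumption rather than a size specified by the patent.

```python
import numpy as np
from scipy.ndimage import median_filter

depth = np.random.rand(64, 64).astype(np.float32)  # stand-in for the network's depth map
depth_final = median_filter(depth, size=3)         # suppresses isolated noisy estimates
```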
Fig. 2 is a schematic structural diagram of the neural network according to the present invention, which includes a polar plane image portion 100, an image segmentation portion 200, and a cascade portion 300.
The polar plane image portion 100 consists of a multi-stream network 110 and a merging network 120 and extracts the initially estimated depth map. The input of the multi-stream network 110 is a 9 × 9 array of sub-aperture images centered on the central view; polar plane images in the four directions 0°, 45°, 90° and 135° are extracted from it and convolved separately by the defined convolution modules to extract the depth features of the scene. The merging network 120 concatenates the outputs of the multi-stream network 110 and then performs convolution with 9 of the defined convolution modules to compute the relationship between the depth features of polar plane images in different directions, yielding the initially estimated depth map.
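To make the multi-stream input concrete, the sketch below assembles the four directional view stacks from a grayscale 9 × 9 sub-aperture array; the tensor layout and the assignment of the two diagonals to 45° and 135° are assumptions of this example.

```python
import numpy as np

def directional_stacks(views: np.ndarray) -> dict:
    """views: (9, 9, H, W) grayscale sub-aperture array -> four (9, H, W) stacks."""
    c = views.shape[0] // 2          # index of the central view
    idx = np.arange(views.shape[0])
    return {
        "0deg":   views[c, idx],          # horizontal line of views through the center
        "90deg":  views[idx, c],          # vertical line of views
        "45deg":  views[idx, idx[::-1]],  # one diagonal (labeling is a convention)
        "135deg": views[idx, idx],        # the other diagonal
    }

views = np.random.rand(9, 9, 64, 64).astype(np.float32)
stacks = directional_stacks(views)
print(stacks["45deg"].shape)  # (9, 64, 64): 9 views stacked as channels for one stream
```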
The image segmentation portion 200 uses a fully convolutional network (FCN) structure. After the central sub-aperture image is input, it is downsampled by 3 defined pooling modules; convolution-deconvolution operations of different depths are then applied to the output of each pooling module, and the results are merged, combining high-level contour information with low-level fine detail to extract the edge information of the image.
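A PyTorch sketch of such an edge-extraction branch is given below; it mirrors the pooling module of FIG. 4, but all channel widths, deconvolution strides and the final fusion layer are illustrative assumptions rather than the patented configuration.

```python
import torch
import torch.nn as nn

class PoolModule(nn.Module):
    """'convolution - ReLU - convolution - batch normalization - ReLU - pooling' (FIG. 4)."""
    def __init__(self, cin: int, cout: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.body(x)

class EdgeBranch(nn.Module):
    """Three pooling modules; each scale is deconvolved back to full resolution and merged."""
    def __init__(self):
        super().__init__()
        self.p1 = PoolModule(1, 16)     # 1/2 resolution
        self.p2 = PoolModule(16, 32)    # 1/4 resolution
        self.p3 = PoolModule(32, 64)    # 1/8 resolution
        self.up1 = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
        self.up2 = nn.ConvTranspose2d(32, 8, kernel_size=4, stride=4)
        self.up3 = nn.ConvTranspose2d(64, 8, kernel_size=8, stride=8)
        self.fuse = nn.Conv2d(24, 1, 3, padding=1)

    def forward(self, x):
        f1 = self.p1(x)
        f2 = self.p2(f1)
        f3 = self.p3(f2)
        merged = torch.cat([self.up1(f1), self.up2(f2), self.up3(f3)], dim=1)
        return self.fuse(merged)        # per-pixel edge features at full resolution

edges = EdgeBranch()(torch.randn(1, 1, 64, 64))
print(edges.shape)  # torch.Size([1, 1, 64, 64])
```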
The cascade portion 300 connects the outputs of the polar plane image portion 100 and the image segmentation portion 200, combining the depth information of the image with its edge information, and applies a 'convolution - ReLU - convolution' operation to the combined features to obtain the secondarily estimated depth map.
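The cascade fusion can be sketched in the same spirit; the channel widths are assumed.

```python
import torch
import torch.nn as nn

class CascadeFusion(nn.Module):
    """Concatenate initial depth and edge features, then 'convolution - ReLU - convolution'."""
    def __init__(self, c_depth: int = 1, c_edge: int = 1, width: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_depth + c_edge, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),  # secondarily estimated depth map
        )

    def forward(self, depth_init, edges):
        return self.fuse(torch.cat([depth_init, edges], dim=1))

out = CascadeFusion()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```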
In another preferred embodiment, the neural network uses small 3 × 3 convolution kernels with a stride of 1 in the polar plane image portion 100 and the cascade portion 300, in order to measure more accurately the minute disparities of narrow-baseline light field images; 'same' padding is used in the convolutions to keep the output depth map the same size as the input sub-aperture images.
In another preferred embodiment, the neural network uses a light field dataset with ground-truth depth maps as the training set and is trained on randomly sampled grayscale patches, with the mean absolute error as the loss function, defined as follows:
$$L(W, b) = \frac{1}{T} \sum_{t=1}^{T} \left| H(g_t; W, b) - d_t \right|$$
wherein L is the loss function, W is the weight matrix, b is the bias, T is the number of training patches, H is the forward propagation function of the network, g is the input 9 × 9 light field sub-aperture image array, and d is the corresponding grayscale patch of the ground-truth depth map. Iterative training reduces the value of the loss function and thus the gray-value difference between the estimated and ground-truth depth maps; when the network parameters change little between iterations and repeated training no longer improves the test results, training is judged to have saturated, training ends, and the trained neural network parameters are saved.
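A minimal sketch of one training step under this objective; `net` is a one-layer placeholder standing in for the full network H of FIG. 2, and the batch layout and learning rate are assumptions, with the 64 × 64 patches and Adam optimizer taken from the preferred embodiments below.

```python
import torch
import torch.nn as nn

net = nn.Conv2d(9, 1, 3, padding=1)         # placeholder for H(.; W, b)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                       # mean absolute error

g = torch.randn(16, 9, 64, 64)              # batch of input view stacks (assumed layout)
d = torch.randn(16, 1, 64, 64)              # matching ground-truth depth patches

optimizer.zero_grad()
loss = loss_fn(net(g), d)                   # L(W, b) = (1/T) sum |H(g) - d|
loss.backward()
optimizer.step()
print(float(loss))
```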
In another preferred embodiment, the neural network augments the training set before training, including rotation, flipping, gamma transformation and the addition of random noise, to avoid overfitting and improve the generalization ability of the network.
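Such augmentation can be sketched on a single grayscale patch as follows; the probabilities, gamma range and noise level are illustrative assumptions, and note that flipping or rotating a full light field must permute the angular axes consistently with the spatial ones, which this per-patch version omits.

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random flip, 90-degree rotation, gamma transformation and additive noise."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=-1)                   # horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)), axes=(-2, -1))
    img = img ** rng.uniform(0.8, 1.25)               # gamma transformation
    img = img + rng.normal(0.0, 0.01, img.shape)      # random noise
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
patch = rng.random((64, 64)).astype(np.float32)
aug = augment(patch, rng)
print(aug.shape, aug.min() >= 0.0, aug.max() <= 1.0)
```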
In another preferred embodiment, the randomly sampled grayscale patches are 64 × 64, and the Adam optimizer is used for optimization.
Fig. 3 is a schematic structural diagram of the convolution module according to the invention. The convolution module is defined as 'convolution layer - ReLU layer - convolution layer - batch normalization - ReLU layer' and is used for the convolution computations of the polar plane image portion. The ReLU layers serve as activation functions that introduce nonlinearity; the batch normalization layer speeds up convergence and limits overfitting.
Fig. 4 is a schematic diagram of the structure of the pooling module of the invention. The pooling module is defined as 'convolution layer - ReLU layer - convolution layer - batch normalization - ReLU layer - pooling layer' and is used to downsample the sub-aperture images and extract low-level information.
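The two building blocks of Fig. 3 and Fig. 4 can be sketched in PyTorch as follows; the channel counts are assumptions of the example.

```python
import torch
import torch.nn as nn

def conv_module(cin: int, cout: int) -> nn.Sequential:
    """Fig. 3: 'convolution - ReLU - convolution - batch normalization - ReLU'."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

def pool_module(cin: int, cout: int) -> nn.Sequential:
    """Fig. 4: the same stack followed by a pooling layer for downsampling."""
    return nn.Sequential(conv_module(cin, cout), nn.MaxPool2d(2))

x = torch.randn(2, 9, 64, 64)        # e.g. a directional stack of 9 views
print(conv_module(9, 32)(x).shape)   # (2, 32, 64, 64): resolution preserved
print(pool_module(9, 32)(x).shape)   # (2, 32, 32, 32): downsampled by 2
```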
In this embodiment, the invention is compared with two typical light field image depth estimation methods: the LF_OCC method, proposed by Wang et al. in 2015 and published at ICCV, and the EPINET method, proposed by Shin et al. in 2018 and published at CVPR.
This example uses the Lytro Illum light field dataset provided by Daudt et al. to test the performance of the invention on real-scene light field data. The dataset comprises 36 sets of Lytro Illum camera data. Fig. 5 shows the depth estimation results for 3 typical scenes: the first column is the central sub-aperture image of the scene, and the second to fourth columns are the results of the LF_OCC method, the EPINET method, and the invention, respectively; the top two rows show two outdoor scenes, and the bottom row an indoor scene.
Analysis of this embodiment shows that the invention estimates depth well in both indoor and outdoor noisy scenes.
Tested on the HCI light field dataset, the light field image depth estimation method based on deep learning achieves the following performance: an average bad pixel rate of 8.201% (at a threshold of 0.07), an average mean squared error of 3.020%, and an average computation time of 0.415 seconds, satisfying fast, high-precision depth estimation for both synthetic and real light field images.
On the basis of a neural network, the method combines polar plane image analysis with image segmentation, exploiting both the depth features and the edge information of the image; this alleviates the mismatching problem in depth estimation of real light field images, reduces bad pixels in the estimated depth map, and improves the accuracy of depth estimation. The whole pipeline exploits the high computing power of the GPU, obtains a high-precision depth map quickly, and meets the requirements of practical applications.
The above description is only exemplary of the preferred embodiments of the present invention, and is not intended to limit the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A light field image depth estimation method based on deep learning is characterized by comprising the following steps:
(1) decoding and reconstructing the light field source file according to the parameter information of the light field camera, and extracting a sub-aperture image array;
(2) inputting the sub-aperture images into a trained neural network for calculation to obtain a secondarily estimated depth map;
wherein the neural network comprises:
a polar plane image portion for extracting an initially estimated depth map from the sub-aperture images;
an image segmentation portion for extracting edge information of the image from the sub-aperture images;
a cascade portion for performing convolution on the initially estimated depth map together with the edge information to obtain the secondarily estimated depth map;
(3) performing median filtering on the secondarily estimated depth map to remove part of the noise and obtain the final estimated depth map.
2. The deep learning-based light field image depth estimation method according to claim 1, wherein in step (1), the parameter information of the light field camera is acquired by processing a white image taken by the camera;
the light field source file is decoded, and the required sub-aperture image array is obtained after filtering and color correction.
3. The deep learning-based light field image depth estimation method according to claim 1, wherein the sub-aperture images are adjusted to a square shape before being input into the neural network.
4. The deep learning-based light field image depth estimation method according to claim 1, wherein the polar plane image portion is composed of a multi-stream network and a merging network.
5. The deep learning-based light field image depth estimation method according to claim 4, wherein the input of the multi-stream network is a 9 × 9 array of sub-aperture images centered on the central view; polar plane images in the four directions 0°, 45°, 90° and 135° are extracted from it and convolved separately by the defined convolution modules to extract the depth features of the scene.
6. The deep learning-based light field image depth estimation method according to claim 4, wherein the merging network is connected to the outputs of the multi-stream network and performs convolution to compute the relationship between the depth features of polar plane images in different directions, yielding the initially estimated depth map.
7. The deep learning-based light field image depth estimation method according to claim 4, wherein the polar plane image portion and the cascade portion employ small 3 × 3 convolution kernels with a stride of 1; 'same' padding is used in the convolutions, keeping the output depth map the same size as the input sub-aperture images.
8. The deep learning-based light field image depth estimation method according to claim 1, wherein the input of the image segmentation portion is the central sub-aperture image, and convolution, pooling and deconvolution layers are used to extract the edge information of the image.
9. The deep learning-based light field image depth estimation method according to claim 1, wherein the neural network uses a light field dataset with ground-truth depth maps as the training set and is trained on randomly sampled grayscale patches, with the mean absolute error as the loss function, defined as follows:
$$L(W, b) = \frac{1}{T} \sum_{t=1}^{T} \left| H(g_t; W, b) - d_t \right|$$
wherein L is the loss function, W is the weight matrix, b is the bias, T is the number of training patches, H is the forward propagation function of the network, g is the input 9 × 9 light field sub-aperture image array, and d is the corresponding grayscale patch of the ground-truth depth map;
the value of the loss function is reduced through iterative training, shrinking the gray-value difference between the final estimated depth map and the ground-truth depth map, until training is judged to have saturated; training then ends and the trained neural network parameters are saved.
10. The deep learning-based light field image depth estimation method according to claim 9, wherein the neural network augments the training set before training, including rotation, flipping, gamma transformation and random noise addition, to avoid overfitting and improve the generalization ability of the network.
CN202010733319.0A 2020-07-27 2020-07-27 A depth estimation method for light field images based on deep learning Pending CN112150526A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010733319.0A CN112150526A (en) 2020-07-27 2020-07-27 A depth estimation method for light field images based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010733319.0A CN112150526A (en) 2020-07-27 2020-07-27 A depth estimation method for light field images based on deep learning

Publications (1)

Publication Number Publication Date
CN112150526A 2020-12-29

Family

ID=73887740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010733319.0A Pending CN112150526A (en) 2020-07-27 2020-07-27 A depth estimation method for light field images based on deep learning

Country Status (1)

Country Link
CN (1) CN112150526A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767466A (en) * 2021-01-20 2021-05-07 大连理工大学 Light field depth estimation method based on multi-mode information
CN114359361A (en) * 2021-12-28 2022-04-15 Oppo广东移动通信有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN114511605A (en) * 2022-04-18 2022-05-17 清华大学 Light field depth estimation method, device, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993260A (en) * 2017-12-14 2018-05-04 浙江工商大学 A kind of light field image depth estimation method based on mixed type convolutional neural networks
CN108389171A (en) * 2018-03-08 2018-08-10 深圳市唯特视科技有限公司 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable
US20180231871A1 (en) * 2016-06-27 2018-08-16 Zhejiang Gongshang University Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN110276795A (en) * 2019-06-24 2019-09-24 大连理工大学 Light field depth estimation method based on splitting iterative algorithm

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180231871A1 (en) * 2016-06-27 2018-08-16 Zhejiang Gongshang University Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN107993260A (en) * 2017-12-14 2018-05-04 浙江工商大学 A kind of light field image depth estimation method based on mixed type convolutional neural networks
CN108389171A (en) * 2018-03-08 2018-08-10 深圳市唯特视科技有限公司 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable
CN110276795A (en) * 2019-06-24 2019-09-24 大连理工大学 Light field depth estimation method based on splitting iterative algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUCHENG WANG et al.: "Light-field-depth-estimation network based on epipolar geometry and image segmentation", Journal of the Optical Society of America A *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767466A (en) * 2021-01-20 2021-05-07 大连理工大学 Light field depth estimation method based on multi-mode information
CN112767466B (en) * 2021-01-20 2022-10-11 大连理工大学 A light field depth estimation method based on multimodal information
CN114359361A (en) * 2021-12-28 2022-04-15 Oppo广东移动通信有限公司 Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN114511605A (en) * 2022-04-18 2022-05-17 清华大学 Light field depth estimation method, device, electronic device and storage medium
WO2023201783A1 (en) * 2022-04-18 2023-10-26 清华大学 Light field depth estimation method and apparatus, and electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN104217404B (en) Haze sky video image clearness processing method and its device
CN109255831B (en) A method for single-view face 3D reconstruction and texture generation based on multi-task learning
Cheng et al. Depth estimation via affinity learned with convolutional spatial propagation network
Tang et al. Single image dehazing via lightweight multi-scale networks
US10353271B2 (en) Depth estimation method for monocular image based on multi-scale CNN and continuous CRF
CN113139898B (en) Super-resolution reconstruction method of light field image based on frequency domain analysis and deep learning
Jiang et al. Alignerf: High-fidelity neural radiance fields via alignment-aware training
CN108564549B (en) Image defogging method based on multi-scale dense connection network
CN112150526A (en) A depth estimation method for light field images based on deep learning
CN106683139A (en) Fisheye-camera calibration system based on genetic algorithm and image distortion correction method thereof
CN114266939B (en) A Brain Extraction Method Based on ResTLU-Net Model
Li et al. Epi-based oriented relation networks for light field depth estimation
CN113962878B (en) Low-visibility image defogging model method
CN108564620A (en) A Scene Depth Estimation Method for Light Field Array Camera
CN113222879B (en) A Generative Adversarial Network for Fusion of Infrared and Visible Light Images
CN115830406A (en) Rapid light field depth estimation method based on multiple parallax scales
CN111553856A (en) Image defogging method based on depth estimation assistance
CN116363036A (en) Infrared and visible light image fusion method based on visual enhancement
CN107025637B (en) Photon counting integration imaging iterative reconstruction method based on Bayesian Estimation
CN110751271B (en) Image traceability feature characterization method based on deep neural network
CN113936022A (en) Image defogging method based on multi-modal characteristics and polarization attention
Lin et al. Transformer-Based Light Field Geometry Learning for No-Reference Light Field Image Quality Assessment
Feng et al. Specular highlight removal of light field image combining dichromatic reflection with exemplar patch filling
CN118135017A (en) Polarization synchronous positioning and mapping method for integrated double-branch superdivision network
CN108090920A (en) A kind of new light field image deep stream method of estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229