CN114529573A - Reflection separation method and system of light field image - Google Patents
Reflection separation method and system of light field image
- Publication number
- CN114529573A (Application No. CN202210089918.2A)
- Authority
- CN
- China
- Prior art keywords
- reflection
- background
- image
- feature
- spatial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G — PHYSICS
  - G06 — COMPUTING; CALCULATING OR COUNTING
    - G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T 7/00 — Image analysis
        - G06T 7/10 — Segmentation; Edge detection
          - G06T 7/194 — Segmentation; Edge detection involving foreground-background segmentation
      - G06T 5/00 — Image enhancement or restoration
        - G06T 5/73 — Deblurring; Sharpening
      - G06T 2207/00 — Indexing scheme for image analysis or image enhancement
        - G06T 2207/10 — Image acquisition modality
          - G06T 2207/10004 — Still image; Photographic image
Abstract
The invention provides a reflection separation method and system for light field images, which address the problem of inaccurate reflection separation of light field images in the prior art. The method extracts multi-scale spatial-angular features from the light field image. After a refocusing operation produces refocused images, a preliminary background image and a preliminary reflection image are generated and further convolved to obtain a background edge map, a background refocused image, a reflection edge map and a reflection refocused image. Background and reflection spatial attention weights are then generated based on a spatial attention mechanism. The multi-scale spatial-angular features are merged with the refocusing features to obtain background reconstruction features and reflection reconstruction features, from which the channel attention weights are further generated. The reconstruction features are adjusted according to the two kinds of weights, and finally the separated background image and reflection image are each reconstructed with a U-shaped network. From a light field image containing reflections, the invention automatically recovers the corresponding background image and reflection image, improving the qualitative and quantitative analysis of the image.
Description
Technical Field
The invention belongs to the field of image processing, and in particular relates to a reflection separation method and system for light field images.
Background Art
A light field camera adds a microlens array between the main lens and the imaging sensor. The microlens array is a two-dimensional array composed of many microlens units and is in a conjugate relationship with the main lens and the imaging sensor, so the position and direction of a ray of light can be determined jointly by the microlens plane and the imaging sensor plane. A light field camera can therefore record the intensity, position and direction of light simultaneously. Compared with a traditional camera, it can shoot first and focus afterwards, which gives it a great advantage in multi-view information acquisition.
Reflection is a common image disturbance. When a photograph is taken through a transparent surface such as glass, objects on the camera side are also captured, which is not only visually unpleasant but also degrades the performance of other computer vision tasks such as object detection and object tracking. Light field images captured by a light field camera suffer from the same problem. Therefore, the captured image generally needs to be separated by a separation method to obtain an image of the actual scene.
In the prior art, single-image reflection separation methods usually have to assume that the reflection layer is weak or that it is blurred; these assumptions severely limit their use in real, complex scenes. In addition, such assumption-based reflection separation methods often take minutes to compute, so their processing speed is slow.
Summary of the Invention
The purpose of the embodiments of the present invention is to propose a reflection separation method for light field images. Multi-scale spatial-angular features are obtained through multi-scale spatial-angular convolution, and a preliminary background image and a preliminary reflection image are obtained from the refocused images produced by a refocusing operation. From these, a background edge map, background refocusing features, a reflection edge map and reflection refocusing features are generated. The background edge map and background refocusing features are used to generate a background spatial attention weight and a background channel attention weight, which adjust the background reconstruction features; the reflection edge map and reflection refocusing features are used to generate a reflection spatial attention weight and a reflection channel attention weight, which adjust the reflection reconstruction features. The background image and the reflection image are then reconstructed from the adjusted background and reflection reconstruction features, respectively. In this way, the corresponding background image and reflection image are automatically recovered from a light field image containing reflections, improving the qualitative and quantitative analysis of the image.
In order to achieve the above purpose, the technical solution adopted by the present invention is as follows:
In a first aspect, an embodiment of the present invention provides a reflection separation method for light field images, the reflection separation method comprising the following steps:
Step S1: acquire an original light field image and perform multi-scale spatial-angular convolution to obtain multi-scale spatial-angular features;
Step S2: perform a refocusing operation on the original light field image to obtain refocused images;
Step S3: perform a dynamic convolution operation on the refocused images to generate a preliminary background image and a preliminary reflection image, respectively;
Step S4: convolve the preliminary background image and the preliminary reflection image, respectively, to obtain a background edge map, background refocusing features, a reflection edge map and reflection refocusing features;
Step S5: based on a spatial attention mechanism, generate a background spatial attention weight from the background edge map and a reflection spatial attention weight from the reflection edge map;
Step S6: merge the multi-scale spatial-angular features with the background refocusing features to obtain background reconstruction features, and merge the multi-scale spatial-angular features with the reflection refocusing features to obtain reflection reconstruction features;
Step S7: based on a channel attention mechanism, generate a background channel attention weight from the background reconstruction features and a reflection channel attention weight from the reflection reconstruction features;
Step S8: adjust the background reconstruction features with the background spatial attention weight and the background channel attention weight to obtain adjusted background reconstruction features; adjust the reflection reconstruction features with the reflection spatial attention weight and the reflection channel attention weight to obtain adjusted reflection reconstruction features;
Step S9: reconstruct the background image from the adjusted background reconstruction features with a U-shaped network, and reconstruct the reflection image from the adjusted reflection reconstruction features with a U-shaped network.
In a preferred embodiment of the present invention, the multi-scale spatial-angular convolution of step S1 specifically comprises the following steps:
performing downsampling with spatial 2D convolutions, applied once and twice respectively, to obtain images at 1/2 and 1/4 of the original resolution;
applying 4 spatial 2D convolutions and 4 angular 2D convolutions to the original-resolution, 1/2-resolution and 1/4-resolution images, respectively, to obtain features at different scales;
fusing the features of different scales with a spatial 2D convolution to obtain the multi-scale spatial-angular features.
In a preferred embodiment of the present invention, the refocusing operation of step S2 is given by:
I_d(x, y) = (1 / (U·V)) Σ_u Σ_v L(u, v, x + d·(u − u_c), y + d·(v − v_c))    (1)
In formula (1), d denotes the refocusing parameter, L denotes the light field image, u, v denote the angular dimensions, x, y denote the spatial dimensions, u_c, v_c denote the angular coordinates of the central view, u ∈ [0, U], v ∈ [0, V], U and V are two constants, and I_d(x, y) denotes the refocused image obtained by refocusing with the value d.
In a preferred embodiment of the present invention, the dynamic convolution of step S3 specifically comprises the following steps:
feeding the refocused images into 2 different groups of 3D convolutions to generate a background dynamic convolution kernel and a reflection dynamic convolution kernel, respectively;
generating the preliminary background image from the refocused images with the background dynamic convolution kernel;
generating the preliminary reflection image from the refocused images with the reflection dynamic convolution kernel.
In a preferred embodiment of the present invention, the generation of the preliminary background image and the preliminary reflection image by dynamic convolution in step S3 can be expressed as:
B_inter(x, y) = Σ_d I(d, x, y) * W_B(d, x, y)    (2)
R_inter(x, y) = Σ_d I(d, x, y) * W_R(d, x, y)    (3)
In formulas (2) and (3), d denotes the refocusing parameter, x, y denote the spatial dimensions, I(d, x, y) denotes all refocused images, B_inter(x, y) denotes the preliminary background image, W_B(d, x, y) denotes the background dynamic convolution kernel, R_inter(x, y) denotes the preliminary reflection image, and W_R(d, x, y) denotes the reflection dynamic convolution kernel.
In a preferred embodiment of the present invention, step S5 of generating the background spatial attention weight from the background edge map and the reflection spatial attention weight from the reflection edge map, based on the spatial attention mechanism, specifically comprises the following steps:
feeding the background edge map into a spatial 2D convolution that reduces its channel number to 1, obtaining the background spatial attention weight;
feeding the reflection edge map into a spatial 2D convolution that reduces its channel number to 1, obtaining the reflection spatial attention weight.
In a preferred embodiment of the present invention, step S7 of generating the background channel attention weight from the background reconstruction features and the reflection channel attention weight from the reflection reconstruction features, based on the channel attention mechanism, specifically comprises the following steps:
applying global max pooling and global average pooling to the background reconstruction features to reduce their spatial resolution to 1×1, obtaining a background max-pooled feature and a background average-pooled feature;
feeding the background max-pooled feature and the background average-pooled feature into a fully connected layer to obtain the background channel attention weight;
applying global max pooling and global average pooling to the reflection reconstruction features to reduce their spatial resolution to 1×1, obtaining a reflection max-pooled feature and a reflection average-pooled feature;
feeding the reflection max-pooled feature and the reflection average-pooled feature into a fully connected layer to obtain the reflection channel attention weight.
In a preferred embodiment of the present invention, the adjusted background reconstruction features and adjusted reflection reconstruction features of step S8 are obtained by the following formulas:
F̂_B(x, y) = CA_B(x, y) · SA_B(x, y) · F_B(x, y)    (4)
F̂_R(x, y) = CA_R(x, y) · SA_R(x, y) · F_R(x, y)    (5)
In formulas (4) and (5), x, y denote the spatial dimensions, CA_B(x, y) denotes the background channel attention weight, SA_B(x, y) denotes the background spatial attention weight, F_B(x, y) denotes the background reconstruction features and F̂_B(x, y) the adjusted background reconstruction features; CA_R(x, y) denotes the reflection channel attention weight, SA_R(x, y) denotes the reflection spatial attention weight, F_R(x, y) denotes the reflection reconstruction features and F̂_R(x, y) the adjusted reflection reconstruction features.
In a preferred embodiment of the present invention, the structure of the U-shaped network in step S9 specifically includes an encoding layer, a decoding layer and a skip-layer structure, wherein:
the encoding layer consists of 3 groups of modules containing spatial 2D convolutions;
the decoding layer consists of 3 groups of modules containing spatial 2D convolutions and transposed convolutions;
the skip-layer structure connects the encoding layer and the decoding layer.
In a second aspect, an embodiment of the present invention further provides a reflection separation system for light field images. The system includes: a light field image acquisition module, a multi-scale spatial-angular feature generation module, a refocusing module, a preliminary separated image generation module, an edge map generation module, a spatial attention weight generation module, a refocusing feature generation module, a reconstruction feature generation module, a channel attention weight generation module, a reconstruction feature adjustment module and an image separation module, wherein:
the light field image acquisition module is used to acquire the original light field image;
the multi-scale spatial-angular feature generation module is used to perform multi-scale spatial-angular convolution on the original light field image, obtain the multi-scale spatial-angular features and send them to the reconstruction feature generation module;
the refocusing module is used to perform a refocusing operation on the original light field image to obtain refocused images;
the preliminary separated image generation module is used to perform a dynamic convolution operation on the refocused images to generate a preliminary background image and a preliminary reflection image, respectively;
the edge map generation module is used to convolve the preliminary background image and the preliminary reflection image, respectively, to obtain a background edge map and a reflection edge map;
the spatial attention weight generation module is used to generate, based on a spatial attention mechanism, a background spatial attention weight from the background edge map and a reflection spatial attention weight from the reflection edge map;
the refocusing feature generation module is used to convolve the preliminary background image and the preliminary reflection image, respectively, to obtain background refocusing features and reflection refocusing features;
the reconstruction feature generation module is used to merge the multi-scale spatial-angular features with the background refocusing features to obtain background reconstruction features, and to merge the multi-scale spatial-angular features with the reflection refocusing features to obtain reflection reconstruction features;
the channel attention weight generation module is used to generate a background channel attention weight from the background reconstruction features and a reflection channel attention weight from the reflection reconstruction features;
the reconstruction feature adjustment module is used to adjust the background reconstruction features with the background spatial attention weight and the background channel attention weight to obtain adjusted background reconstruction features, and to adjust the reflection reconstruction features with the reflection spatial attention weight and the reflection channel attention weight to obtain adjusted reflection reconstruction features;
the image separation module is used to reconstruct the background image from the adjusted background reconstruction features with a U-shaped network, and to reconstruct the reflection image from the adjusted reflection reconstruction features with a U-shaped network.
The present invention has the following beneficial effects:
The reflection separation method for light field images of the embodiments of the present invention automatically separates the background image and the reflection image from a light field image containing reflections, obtaining clearer and more independent background and reflection images of the real scene with richer recovered detail, while greatly improving the processing speed. The embodiments of the present invention can be applied to image preprocessing for many common computer vision tasks, such as object detection, object tracking and 3D reconstruction; without increasing the image processing time, they improve the image processing results and the qualitative and quantitative analysis of the image. For example, the separated background image can be used for more accurate object detection, object tracking and 3D reconstruction, and the separated reflection image can be used in criminal investigation, for instance in solving cases.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the reflection separation method for light field images according to an embodiment of the present invention;
Fig. 2 is an example of the generation of refocused images in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the reflection separation of a light field image according to an embodiment of the present invention.
Detailed Description
The technical problems, technical solutions and advantages of the present invention are explained in detail below with reference to exemplary embodiments. The exemplary embodiments described below are only intended to explain the present invention and should not be construed as limiting it. Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be interpreted as having meanings consistent with their meanings in the context of the prior art and, unless defined herein, are not to be interpreted in an idealized or overly formal sense.
The present invention provides a reflection separation method for light field images, which separates a background image and a reflection image from an acquired light field image containing reflections. The reflection separation method uses the rich angular and spatial information of the light field image to separate the background image and the reflection image automatically and accurately, where the reflection image refers to the interference image formed by reflection. Compared with the existing technology, it has the advantages of an excellent separation result and a fast processing speed.
The present invention is described in further detail below through specific embodiments with reference to the drawings.
This embodiment provides a reflection separation method for light field images, which separates an input light field image containing reflections and outputs the background image and reflection image of its central view.
Fig. 1 is a flowchart of the reflection separation method for light field images of this embodiment. As shown in Fig. 1 and Fig. 3, the reflection separation method for light field images of this embodiment includes the following steps.
Step S1: acquire an original light field image and perform multi-scale spatial-angular convolution to obtain multi-scale spatial-angular features.
In this step, the multi-scale spatial-angular convolution of the light field image specifically includes: first, downsampling with spatial 2D convolutions (kernel size 3×3, stride 2, padding 1), applied once and twice respectively, to obtain images at 1/2 and 1/4 of the original resolution; then applying 4 spatial 2D convolutions and 4 angular 2D convolutions to the original-resolution, 1/2-resolution and 1/4-resolution images, respectively, to obtain features at 3 different scales. Here the 4 spatial 2D convolutions and the first two angular 2D convolutions have kernel size 3×3, stride 1 and padding 1, while the last two angular 2D convolutions have kernel size 3×3, stride 1 and padding 0. The spatial convolutions convolve the above images over the x, y dimensions, and the angular convolutions convolve them over the u, v dimensions. In this embodiment, a convolution operation can be expressed by the following formula:
Y = h(W ⊛ X + B)    (1)
In formula (1), ⊛ denotes the convolution operation, W denotes the weight matrix, B denotes the bias matrix, X is the input, Y is the output, and h is the activation function. Preferably, in this embodiment, all activation functions are ReLU functions, which can be expressed as:
h(x) = max(0, x)    (2)
Afterwards, the features of the different scales are fused into the multi-scale spatial-angular features by one spatial 2D convolution. The scale here refers to the resolution; this spatial 2D convolution has kernel size 3×3, stride 1 and padding 1.
It should be noted that 2D convolution includes spatial 2D convolution and angular 2D convolution; when no restriction is placed on the two kinds of convolution, the term 2D convolution is used, and spatial 2D convolution and/or angular 2D convolution is performed as required.
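For illustration only (this sketch is not part of the claims), the two convolution primitives of step S1 can be written as follows in PyTorch for a light field tensor laid out as [B, C, U, V, H, W]; the tensor layout and the channel counts are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class SpatialConv2D(nn.Module):
    """3x3 convolution over the spatial (x, y) dimensions of a light field tensor."""
    def __init__(self, c_in, c_out, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, stride=stride, padding=padding)

    def forward(self, lf):                                   # lf: [B, C, U, V, H, W]
        b, c, u, v, h, w = lf.shape
        x = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = F.relu(self.conv(x))                             # ReLU activation, formula (2)
        _, c2, h2, w2 = x.shape
        return x.reshape(b, u, v, c2, h2, w2).permute(0, 3, 1, 2, 4, 5)

class AngularConv2D(nn.Module):
    """3x3 convolution over the angular (u, v) dimensions of a light field tensor."""
    def __init__(self, c_in, c_out, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, stride=1, padding=padding)

    def forward(self, lf):                                   # lf: [B, C, U, V, H, W]
        b, c, u, v, h, w = lf.shape
        x = lf.permute(0, 4, 5, 1, 2, 3).reshape(b * h * w, c, u, v)
        x = F.relu(self.conv(x))
        _, c2, u2, v2 = x.shape
        return x.reshape(b, h, w, c2, u2, v2).permute(0, 3, 4, 5, 1, 2)
```

A multi-scale branch would apply a stride-2 SpatialConv2D once or twice for the 1/2 and 1/4 resolutions, chain four spatial and four angular blocks per scale, and fuse the results with one further spatial convolution, as described above.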
Step S2: obtain refocused images by performing a refocusing operation on the original light field image.
In this step, the refocusing operation can be expressed as:
I_d(x, y) = (1 / (U·V)) Σ_u Σ_v L(u, v, x + d·(u − u_c), y + d·(v − v_c))    (3)
In formula (3), d denotes the refocusing parameter, L denotes the light field image, u, v denote the angular dimensions, x, y denote the spatial dimensions, u_c, v_c denote the angular coordinates of the central view, u ∈ [0, U], v ∈ [0, V], U and V are two constants, and I_d(x, y) denotes the refocused image obtained by refocusing with the value d. Fig. 2 shows an example of the generation of refocused images.
After the above refocusing operation, several refocused images are generally obtained. Preferably, in this embodiment d ∈ [−2, 2] with a step of 0.4, so a total of 11 refocused images are generated.
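For illustration only, a minimal shift-and-add refocusing sketch consistent with formula (3) is given below; the bilinear resampling, the choice of u_c, v_c as the mean angular index and the 1/(U·V) normalisation are assumptions rather than details fixed by the patent.

```python
import torch
import torch.nn.functional as F

def refocus(lf, d):
    """Shift-and-add refocusing of a light field.
    lf: [U, V, C, H, W] sub-aperture images; d: refocus parameter (float).
    Returns the refocused image [C, H, W]."""
    U, V, C, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0                  # central view coordinates (assumed)
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    out = torch.zeros(C, H, W)
    for u in range(U):
        for v in range(V):
            # sample L(u, v, x + d*(u - uc), y + d*(v - vc)) with bilinear interpolation
            gx = (xs + d * (u - uc)) / (W - 1) * 2 - 1     # normalise to [-1, 1]
            gy = (ys + d * (v - vc)) / (H - 1) * 2 - 1
            grid = torch.stack((gx, gy), dim=-1).unsqueeze(0)          # [1, H, W, 2]
            view = lf[u, v].unsqueeze(0)                               # [1, C, H, W]
            out += F.grid_sample(view, grid, mode="bilinear",
                                 padding_mode="border", align_corners=True)[0]
    return out / (U * V)

# one focal stack: d in [-2, 2] with step 0.4 gives 11 refocused images
# focal_stack = torch.stack([refocus(lf, float(d)) for d in torch.arange(-2, 2.01, 0.4)])
```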
Step S3: perform a dynamic convolution operation on the refocused images to generate a preliminary background image and a preliminary reflection image, respectively.
In this step, two different dynamic convolutions are applied to the refocused images, each realised by 5 serial 3D convolutions, to generate a background dynamic convolution kernel and a reflection dynamic convolution kernel, respectively. The preliminary background image is generated from the refocused images with the background dynamic convolution kernel, and the preliminary reflection image is generated from the refocused images with the reflection dynamic convolution kernel. The specific formulas are as follows:
B_inter(x, y) = Σ_d I(d, x, y) * W_B(d, x, y)    (4)
R_inter(x, y) = Σ_d I(d, x, y) * W_R(d, x, y)    (5)
In formulas (4) and (5), d denotes the refocusing parameter, x, y denote the spatial dimensions, I(d, x, y) denotes all refocused images, B_inter(x, y) denotes the preliminary background image, W_B(d, x, y) denotes the background dynamic convolution kernel, R_inter(x, y) denotes the preliminary reflection image, and W_R(d, x, y) denotes the reflection dynamic convolution kernel.
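The sketch below illustrates one possible reading of this step, again for illustration only: a small stack of 3D convolutions predicts a per-pixel weight over the focal dimension, and the focal stack is collapsed with it as in formulas (4) and (5). The channel widths and the decision to leave the predicted weights unnormalised are assumptions.

```python
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Predicts a per-pixel kernel W(d, x, y) over the focal stack with 5 serial 3D
    convolutions and collapses the stack with it as in formulas (4)-(5)."""
    def __init__(self, mid=16):
        super().__init__()
        layers, c = [], 3                            # RGB channels feed the Conv3d (assumed)
        for c_out in (mid, mid, mid, mid, 1):        # 5 serial 3D convolutions
            layers += [nn.Conv3d(c, c_out, kernel_size=3, padding=1), nn.ReLU()]
            c = c_out
        self.kernel_net = nn.Sequential(*layers[:-1])   # drop the final ReLU

    def forward(self, stack):                        # stack: [B, 3, D, H, W], D refocused images
        w = self.kernel_net(stack)                   # dynamic kernel W(d, x, y): [B, 1, D, H, W]
        return (stack * w).sum(dim=2)                # sum over d -> preliminary image [B, 3, H, W]

# Two independent instances give the preliminary background and reflection images:
# b_inter = DynamicFusion()(focal_stack)
# r_inter = DynamicFusion()(focal_stack)
```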
Step S4: convolve the preliminary background image and the preliminary reflection image, respectively, to obtain a background edge map, background refocusing features, a reflection edge map and reflection refocusing features.
In this step, 3 spatial 2D convolutions are applied to the preliminary background image to obtain the background edge map and the background refocusing features, and 3 spatial 2D convolutions are applied to the preliminary reflection image to obtain the reflection edge map and the reflection refocusing features. To obtain better edge maps, an L1 loss on the edge maps is introduced for supervision.
Preferably, the spatial 2D convolutions in this step have kernel size 3×3, stride 1 and padding 1.
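A possible arrangement of the three convolutions is sketched below for illustration; how they are split between a shared trunk and the two output heads, and the channel width c_feat, are assumptions not specified in the patent.

```python
import torch.nn as nn

class EdgeAndFeatureHead(nn.Module):
    """Three 3x3 spatial convolutions over a preliminary image: one shared convolution,
    one head for the edge map and one head for the refocusing feature."""
    def __init__(self, c_in=3, c_feat=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(c_in, c_feat, 3, 1, 1), nn.ReLU())
        self.edge_head = nn.Conv2d(c_feat, 1, 3, 1, 1)       # edge map, supervised with an L1 loss
        self.feat_head = nn.Conv2d(c_feat, c_feat, 3, 1, 1)  # refocusing feature

    def forward(self, img):                                   # img: [B, c_in, H, W]
        x = self.shared(img)
        return self.edge_head(x), self.feat_head(x)

# edge supervision, with gt_edge a hypothetical ground-truth edge map:
# edge_loss = torch.nn.functional.l1_loss(pred_edge, gt_edge)
```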
Step S5: based on a spatial attention mechanism, generate a background spatial attention weight from the background edge map and a reflection spatial attention weight from the reflection edge map.
In this step, the background edge map is fed into one spatial 2D convolution that reduces its channel number to 1, giving the background spatial attention weight; likewise, the reflection edge map is fed into one spatial 2D convolution that reduces its channel number to 1, giving the reflection spatial attention weight.
Preferably, the spatial 2D convolutions in this step have kernel size 3×3, stride 1 and padding 1.
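For illustration, a minimal sketch of this single-convolution spatial attention head follows; the sigmoid squashing of the output is an assumption, since the patent only specifies the convolution and the single output channel.

```python
import torch
import torch.nn as nn

class SpatialAttentionHead(nn.Module):
    """One 3x3 convolution maps an edge map to a 1-channel spatial attention weight SA(x, y)."""
    def __init__(self, c_in=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, 1, 3, 1, 1)

    def forward(self, edge_map):                     # edge_map: [B, c_in, H, W]
        return torch.sigmoid(self.conv(edge_map))    # SA: [B, 1, H, W]
```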
Step S6: merge the multi-scale spatial-angular features with the background refocusing features to obtain background reconstruction features, and merge the multi-scale spatial-angular features with the reflection refocusing features to obtain reflection reconstruction features.
In this step, the multi-scale spatial-angular features and the background refocusing features are fed into one spatial 2D convolution to obtain the background reconstruction features; at the same time, the multi-scale spatial-angular features and the reflection refocusing features are fed into another spatial 2D convolution to obtain the reflection reconstruction features.
Preferably, the spatial 2D convolutions in this step have kernel size 3×3, stride 1 and padding 1.
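A minimal sketch of this merging step follows, for illustration only; channel-wise concatenation before the fusing convolution, the folding of the angular dimensions of the spatial-angular feature into channels, and the channel counts are all assumptions.

```python
import torch
import torch.nn as nn

class MergeFeatures(nn.Module):
    """Step S6 sketch: concatenate the multi-scale spatial-angular feature with a
    refocusing feature along channels and fuse them with one 3x3 spatial convolution."""
    def __init__(self, c_sa=64, c_refocus=32, c_out=64):
        super().__init__()
        self.fuse = nn.Conv2d(c_sa + c_refocus, c_out, 3, 1, 1)

    def forward(self, sa_feat, refocus_feat):    # both [B, C, H, W] on the central-view grid
        return torch.relu(self.fuse(torch.cat([sa_feat, refocus_feat], dim=1)))
```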
Step S7: based on a channel attention mechanism, generate a background channel attention weight from the background reconstruction features and a reflection channel attention weight from the reflection reconstruction features.
In this step, global max pooling and global average pooling are applied to the background reconstruction features so that their spatial resolution becomes 1×1, giving a background max-pooled feature and a background average-pooled feature, which are fed into a fully connected layer to obtain the background channel attention weight. Similarly, global max pooling and global average pooling are applied to the reflection reconstruction features so that their spatial resolution becomes 1×1, giving a reflection max-pooled feature and a reflection average-pooled feature, which are fed into a fully connected layer to obtain the reflection channel attention weight.
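A CBAM-style sketch of this channel attention branch is given below for illustration; sharing one small MLP between the two pooled features, the reduction ratio and the sigmoid are assumptions beyond what the patent states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Global max- and average-pooled features (spatial size 1x1) pass through a fully
    connected layer to give one weight per channel."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels))

    def forward(self, feat):                             # feat: [B, C, H, W]
        b, c, _, _ = feat.shape
        mx = F.adaptive_max_pool2d(feat, 1).view(b, c)   # global max-pooled feature
        av = F.adaptive_avg_pool2d(feat, 1).view(b, c)   # global average-pooled feature
        w = torch.sigmoid(self.fc(mx) + self.fc(av))     # CA weight, one value per channel
        return w.view(b, c, 1, 1)
```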
Step S8: adjust the background reconstruction features with the background spatial attention weight and the background channel attention weight to obtain adjusted background reconstruction features; adjust the reflection reconstruction features with the reflection spatial attention weight and the reflection channel attention weight to obtain adjusted reflection reconstruction features.
In this step, the adjusted reconstruction features are obtained according to formulas (6) and (7):
F̂_B(x, y) = CA_B(x, y) · SA_B(x, y) · F_B(x, y)    (6)
F̂_R(x, y) = CA_R(x, y) · SA_R(x, y) · F_R(x, y)    (7)
In formulas (6) and (7), x, y denote the spatial dimensions, CA_B(x, y) denotes the background channel attention weight, SA_B(x, y) denotes the background spatial attention weight, F_B(x, y) denotes the background reconstruction features and F̂_B(x, y) the adjusted background reconstruction features; CA_R(x, y) denotes the reflection channel attention weight, SA_R(x, y) denotes the reflection spatial attention weight, F_R(x, y) denotes the reflection reconstruction features and F̂_R(x, y) the adjusted reflection reconstruction features.
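As a sketch of formulas (6) and (7) only, the adjustment can be written as a broadcast element-wise product; reading the adjustment as a plain product, and the tensor shapes, are assumptions.

```python
def adjust(feat, ca, sa):
    """Re-weight a reconstruction feature feat ([B, C, H, W]) with its channel attention
    weight ca ([B, C, 1, 1]) and spatial attention weight sa ([B, 1, H, W])."""
    return feat * ca * sa

# f_b_adj = adjust(f_b, ca_b, sa_b)    # adjusted background reconstruction feature
# f_r_adj = adjust(f_r, ca_r, sa_r)    # adjusted reflection reconstruction feature
```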
Step S9: reconstruct the background image from the adjusted background reconstruction features with a U-shaped network, and reconstruct the reflection image from the adjusted reflection reconstruction features with a U-shaped network.
In this step, the adjusted background reconstruction features and the adjusted reflection reconstruction features are fed into two different U-shaped networks. A U-shaped network includes an encoding layer, a decoding layer and skip-layer connections. The encoding layer consists of 3 groups of modules containing spatial 2D convolutions, where the spatial 2D convolutions use strides of 1 and 2; the decoding layer consists of 3 groups of modules containing spatial 2D convolutions and transposed convolutions; the skip-layer structure connects the encoding layer and the decoding layer. During training, L1 loss supervision is added on the background image and the reflection image; finally, the background image and the reflection image are obtained from the U-shaped networks. When the U-shaped network is used to obtain the background image and the reflection image, the encoding layer performs 2 downsampling operations (spatial 2D convolutions with kernel size 3×3, stride 2 and padding 1) and 3 spatial 2D convolutions (kernel size 3×3, stride 1, padding 1); the decoding layer performs 2 upsampling operations (transposed convolutions) and 3 spatial 2D convolutions (kernel size 3×3, stride 1, padding 1).
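A compact U-shaped network matching this description is sketched below for illustration; the channel widths, the 2×2 transposed-convolution kernels, the exact placement of the stride-1 convolutions and the requirement that H and W be divisible by 4 are assumptions.

```python
import torch
import torch.nn as nn

class SmallUNet(nn.Module):
    """U-shaped network with two stride-2 downsampling convolutions, two transposed
    convolutions and skip connections, reconstructing an image from a feature map."""
    def __init__(self, c_in=64, c_out=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(c_in, 64, 3, 1, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),
                                  nn.Conv2d(128, 128, 3, 1, 1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(128, 256, 3, 2, 1), nn.ReLU(),
                                  nn.Conv2d(256, 256, 3, 1, 1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(256, 128, 3, 1, 1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(128, 64, 3, 1, 1), nn.ReLU(),
                                  nn.Conv2d(64, c_out, 3, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)                                       # full resolution
        e2 = self.enc2(e1)                                      # 1/2 resolution
        e3 = self.enc3(e2)                                      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))    # skip connection
        return d1                                               # reconstructed image

# background = SmallUNet()(adjusted_background_feature)
# reflection = SmallUNet()(adjusted_reflection_feature)   # a second, independent U-shaped network
```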
As can be seen from the above, the reflection separation method for light field images of the embodiments of the present invention can automatically obtain a background image and a reflection image from a single original light field image containing reflections. The separation result is good, the processing speed is fast, and the method is highly robust to real, complex scenes; the obtained background image and reflection image are highly independent and rich in detail, and can be widely used in common computer vision tasks to improve their performance, such as object detection, object tracking and 3D reconstruction. For example, the separated background image can be used for more accurate object detection and tracking, and the separated reflection image can be used in criminal investigation, for instance in solving cases.
Based on the same idea, an embodiment of the present invention further provides a reflection separation system for light field images. The system includes: a light field image acquisition module, a multi-scale spatial-angular feature generation module, a refocusing module, a preliminary separated image generation module, an edge map generation module, a spatial attention weight generation module, a refocusing feature generation module, a reconstruction feature generation module, a channel attention weight generation module, a reconstruction feature adjustment module and an image separation module.
The light field image acquisition module is used to acquire the original light field image;
the multi-scale spatial-angular feature generation module is used to perform multi-scale spatial-angular convolution on the original light field image, obtain the multi-scale spatial-angular features and send them to the reconstruction feature generation module;
the refocusing module is used to perform a refocusing operation on the original light field image to obtain refocused images;
the preliminary separated image generation module is used to perform a dynamic convolution operation on the refocused images to generate a preliminary background image and a preliminary reflection image, respectively;
the edge map generation module is used to convolve the preliminary background image and the preliminary reflection image, respectively, to obtain a background edge map and a reflection edge map;
the spatial attention weight generation module is used to generate, based on a spatial attention mechanism, a background spatial attention weight from the background edge map and a reflection spatial attention weight from the reflection edge map;
the refocusing feature generation module is used to convolve the preliminary background image and the preliminary reflection image, respectively, to obtain background refocusing features and reflection refocusing features;
the reconstruction feature generation module is used to merge the multi-scale spatial-angular features with the background refocusing features to obtain background reconstruction features, and to merge the multi-scale spatial-angular features with the reflection refocusing features to obtain reflection reconstruction features;
the channel attention weight generation module is used to generate a background channel attention weight from the background reconstruction features and a reflection channel attention weight from the reflection reconstruction features;
the reconstruction feature adjustment module is used to adjust the background reconstruction features with the background spatial attention weight and the background channel attention weight to obtain adjusted background reconstruction features, and to adjust the reflection reconstruction features with the reflection spatial attention weight and the reflection channel attention weight to obtain adjusted reflection reconstruction features;
the image separation module is used to reconstruct the background image from the adjusted background reconstruction features with a U-shaped network, and to reconstruct the reflection image from the adjusted reflection reconstruction features with a U-shaped network.
In this embodiment, each module is implemented by a processor, and memory is added as needed when storage is required. The processor may be, but is not limited to, a microprocessor (MPU), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), another programmable logic device, a discrete gate, a transistor logic device or a discrete hardware component. The memory may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the aforementioned processor.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (for example coaxial cable, optical fiber or digital subscriber line (DSL)) or wirelessly (for example infrared, radio or microwave).
It should also be noted that the reflection separation system for light field images of this embodiment corresponds to the reflection separation method; the description and limitations of the method also apply to the system and are not repeated here.
The above are preferred embodiments of the present invention. It should be pointed out that the present invention is not limited to the exemplary embodiments disclosed above, and the essence of the description is only to help those skilled in the relevant art to fully understand the specific details of the present invention. For those of ordinary skill in the art, improvements and refinements, as well as readily conceivable changes or substitutions, made within the technical scope disclosed by the present invention without departing from its principles shall all fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210089918.2A CN114529573B (en) | 2022-01-25 | 2022-01-25 | A reflection separation method and system for light field images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210089918.2A CN114529573B (en) | 2022-01-25 | 2022-01-25 | A reflection separation method and system for light field images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114529573A true CN114529573A (en) | 2022-05-24 |
CN114529573B CN114529573B (en) | 2025-04-01 |
Family
ID=81623067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210089918.2A Active CN114529573B (en) | 2022-01-25 | 2022-01-25 | A reflection separation method and system for light field images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114529573B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113397472A (en) * | 2015-03-16 | 2021-09-17 | 奇跃公司 | Wearable augmented reality device and wearable virtual reality device |
US20210267451A1 (en) * | 2018-07-06 | 2021-09-02 | The Johns Hopkins University | Computational lightfield ophthalmoscope |
CN112215879A (en) * | 2020-09-25 | 2021-01-12 | 北京交通大学 | Depth extraction method of light field polar plane image |
Non-Patent Citations (3)
Title |
---|
T. Li et al., "Improved multiple-image-based reflection removal algorithm using deep neural networks," IEEE Trans. Image Process., vol. 30, pp. 68-79, 2021. *
Zeqi Shen et al., "Light Field Reflection and Background Separation Network Based on Adaptive Focus Selection," IEEE Transactions on Computational Imaging, pp. 1-13, 2023. *
Ji Yong et al., "Research on clarity enhancement of underwater light field imaging," Journal of Electronic Measurement and Instrumentation, vol. 35, no. 4, pp. 66-72, April 2021. *
Also Published As
Publication number | Publication date |
---|---|
CN114529573B (en) | 2025-04-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||