
CN103096113B - Method of generating stereo image array of discrete view collection combined window intercept algorithm - Google Patents


Info

Publication number
CN103096113B
CN103096113B (application CN201310051957.4A)
Authority
CN
China
Prior art keywords
discrete
image
pattern matrix
discrete view
dvi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310051957.4A
Other languages
Chinese (zh)
Other versions
CN103096113A (en)
Inventor
王世刚
吕源治
金福寿
王学军
赵岩
王小雨
李雪松
俞珏琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201310051957.4A
Publication of CN103096113A
Application granted
Publication of CN103096113B


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The method for generating a stereo element image array by discrete viewpoint acquisition combined with a window clipping algorithm belongs to the technical field of stereoscopic image generation. The invention comprises the following steps: acquiring a discrete viewpoint image array; calculating the position of the photographed object in each discrete viewpoint image; calculating the horizontal relative displacement of the clipping window between any two horizontally adjacent discrete viewpoint images in the array, and its vertical relative displacement between any two vertically adjacent discrete viewpoint images; calculating the size of the clipping window; calculating the position of the lower-right corner of the clipping window in the discrete viewpoint image in the first row and first column of the array; clipping the discrete viewpoint image array to generate a sub-image array; and converting the sub-image array into a stereo element image array. The invention is not limited by the acquisition equipment and can generate high-resolution stereo element image arrays of real scenes; compared with the traditional direct camera-array acquisition method, it greatly reduces shooting cost and workload.

Description

Stereo Element Image Array Generation Method Based on Discrete Viewpoint Acquisition Combined with a Window Clipping Algorithm

Technical Field

The invention belongs to the technical field of stereoscopic image generation, and in particular relates to a method for generating the stereo element image array of an integral (combined) stereoscopic imaging system.

Background

For a long time, visual information about the world has been captured mainly with a single camera, a form of acquisition that cannot give the human eye a sense of depth, stereoscopy, or an all-round view of an object. Driven by advances in related disciplines and by new technical demands, stereoscopic display technology has emerged. It falls into two classes: stereoscopic display based on binocular parallax and true three-dimensional display. Binocular-parallax displays can be further divided into autostereoscopic (naked-eye) displays, which are mostly grating-based, and displays that require auxiliary equipment, chiefly 3D glasses. Binocular-parallax technology is widely used in cinemas today because it is easy to implement and inexpensive; however, because it delivers separate images to the left and right eyes and forces the viewer's brain to fuse them into a stereoscopic impression, it readily causes visual fatigue, and it cannot provide continuous parallax over multiple viewing angles, so it is not an ideal stereoscopic display method. True three-dimensional display technologies mainly include holography, volumetric display, and integral (combined) stereoscopic display. They reproduce all the information of the photographed object in space, so viewers obtain depth perception through the natural accommodation of their eyes without visual fatigue; they therefore represent the development trend of stereoscopic display technology. Compared with holography and volumetric displays, whose spatio-temporal resolution is limited, integral stereoscopic display can restore the spatial structure and positional relationships of the photographed scene over a wide viewing zone, and viewers can experience a full-parallax stereoscopic scene without any auxiliary equipment, making it an important part of the new generation of stereoscopic display technology.

An integral stereoscopic imaging system consists of two parts, acquisition and display. During acquisition, as shown in Figure 1, when the light emitted by a real object passes through a lens array and is recorded on a recording medium, a stereo element image array is obtained; each stereo element image in the array records the image information of the photographed object from a different position and angle. During display, as shown in Figure 2, the stereo element image array is presented in front of a lens array by a high-resolution flat-panel display, and the lens array converges the light emitted from the array into a real stereoscopic scene in space.

The lens-array acquisition method is the simplest and most direct way to obtain a stereo element image array: a lens array and a single recording medium directly photograph the array for a 3D object. In practice, however, this method has many shortcomings, such as low display resolution, a narrow viewing angle, and a small depth of field. Many improved acquisition methods have been proposed to address these problems. J.-S. Jang, B. Javidi, and colleagues proposed the MALT method based on time multiplexing, which obtains more stereo element images by raising the spatial sampling rate and thus effectively improves the resolution of the reconstructed image; the same authors also proposed the SAII method, which not only improves display resolution but also enlarges the viewing zone. Although these methods improve the display quality of the integral imaging system to varying degrees over the lens-array acquisition method, for a given image recording medium there is always an irreconcilable trade-off between the resolution and the number of stereo element images. Acquisition with a camera array resolves this trade-off well, with each camera in the array recording one stereo element image of the scene. However, as display resolution keeps increasing, so does the scale of the camera array: to obtain a stereo element image array with a display resolution of 1024×768 in which each stereo element contains 20×20 pixels, 1024×768 cameras would be needed; such a large camera array is obviously both expensive and hard to align. Given these shortcomings of the lens-array and camera-array acquisition methods, a more effective synthesis method for the stereo element image array of an integral imaging system is needed, one that can generate a high-resolution array without high experimental cost and complex alignment work.

Summary of the Invention

The object of the present invention is to provide a method for generating a stereo element image array by discrete viewpoint acquisition combined with a window clipping algorithm, so as to realize the stereoscopic display of real scenes.

The present invention comprises the following steps:

1. Acquire the discrete viewpoint image array, comprising the following steps:

1.1 Initialization: adjust the two tripods supporting the stereo photography rail to fix the rail at a relatively low height, and mount the camera on the rail; the rail moves the camera from right to left at a constant speed, and the operator starts and stops the rail with a remote control;

1.2 Acquire a discrete viewpoint image group: use a spirit level to adjust the plane of the rail parallel to the horizontal plane, move the camera to the rightmost end of the rail with the remote control, then hold down the camera's shutter release while starting the rail; as the camera moves, use its continuous-shooting function to capture multiple discrete viewpoint images of the subject. Arranging the captured images in a row from left to right in shooting order yields the discrete viewpoint image group acquired at this rail height;

1.3 Using a vertical ruler, raise the two tripods supporting the rail so that the rail rises by a fixed distance;

1.4 Repeat the shooting process of steps 1.2 and 1.3 to obtain multiple discrete viewpoint image groups;

1.5 Arrange all discrete viewpoint image groups from top to bottom in acquisition order into a discrete viewpoint image array, i.e., place the first group acquired in the first row of the array and the last group acquired in the last row;

2. Calculate the position of the photographed object in each discrete viewpoint image:

The position parameters of the object in each discrete viewpoint image include: the horizontal relative displacement of the object between any two horizontally adjacent discrete viewpoint images in the array; the vertical relative displacement of the object between any two vertically adjacent discrete viewpoint images; the positions of the object's top, bottom, left, and right boundaries in the discrete viewpoint image in the first row and first column of the array; and the positions of those boundaries in the discrete viewpoint image in the last row and last column of the array;

2.1 Determine the horizontal relative displacement of the object between any two horizontally adjacent discrete viewpoint images in the array. Let DVI_{i,j} denote the discrete viewpoint image in column i, row j of the array. First, shift DVI_{1,1} to the right pixel by pixel; after each shift, compute the peak signal-to-noise ratio (PSNR) of the overlapping region of the shifted DVI_{1,1} and DVI_{2,1}, defined as:

PSNR(s) = 10 × log10[255^2 / MSE(s)]

where s is the shift distance of DVI_{1,1}, PSNR(s) is the peak signal-to-noise ratio of the overlapping region of the shifted DVI_{1,1} and DVI_{2,1}, and MSE(s) is the mean squared error, defined as:

MSE(s) = [1 / ((X - s) × Y)] × Σ_{x=0}^{X-1-s} Σ_{y=0}^{Y-1} [DVI_{1,1}(x, y) - DVI_{2,1}(x + s, y)]^2

where x and y are the horizontal and vertical pixel coordinates in DVI_{1,1}, and X and Y are the numbers of pixels in the horizontal and vertical directions of a discrete viewpoint image;

Then take the shift distance that maximizes the PSNR as the horizontal relative displacement of the object between any two horizontally adjacent discrete viewpoint images in the array;
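The displacement search of step 2.1 can be sketched as follows in Python/NumPy. This is not part of the patent; the function name and the default search range are illustrative assumptions.

```python
import numpy as np

def horizontal_displacement(dvi_a, dvi_b, s_max=None):
    """Step 2.1 sketch: shift dvi_a (DVI_{1,1}) right pixel by pixel and return
    the shift s that maximizes the PSNR of its overlap with dvi_b (DVI_{2,1}).
    The search limit s_max is an assumption; the patent does not state one."""
    Y, X = dvi_a.shape
    if s_max is None:
        s_max = X // 2
    best_s, best_psnr = 0, -np.inf
    for s in range(1, s_max + 1):
        # overlapping region of the shifted DVI_{1,1} and DVI_{2,1}
        diff = dvi_a[:, :X - s].astype(float) - dvi_b[:, s:].astype(float)
        mse = np.mean(diff ** 2)                      # MSE(s)
        psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
        if psnr > best_psnr:
            best_psnr, best_s = psnr, s
    return best_s
```

With an exhaustive search over integer shifts, a perfectly overlapping pair (zero MSE, infinite PSNR) returns the exact displacement.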

2.2 Determine the vertical relative displacement of the object between any two vertically adjacent discrete viewpoint images in the array: first transpose DVI_{1,1} and DVI_{1,2}, then apply the same calculation as for the horizontal relative displacement in step 2.1;

2.3 Determine the positions of the object's top, bottom, left, and right boundaries in DVI_{1,1}: first, compute the difference image of DVI_{1,1} and DVI_{2,1} and apply median filtering; then take the positions of the top, bottom, and left boundaries of all non-zero points in the median-filtered difference image as the positions of the object's top, bottom, and left boundaries in DVI_{1,1}, and take the position of the right boundary of all non-zero points minus the object's horizontal relative displacement as the position of the object's right boundary in DVI_{1,1};

2.4 Determine the positions of the object's top, bottom, left, and right boundaries in the discrete viewpoint image in the last row and last column of the array: first, compute the difference image of the discrete viewpoint image in the last row, last column and the one in the last row, second-to-last column, and apply median filtering; then take the positions of the top, bottom, and right boundaries of all non-zero points in the median-filtered difference image as the positions of the object's top, bottom, and right boundaries in that image, and take the position of the left boundary of all non-zero points plus the object's horizontal relative displacement as the position of the object's left boundary in that image;
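A minimal sketch of the boundary detection used in steps 2.3 and 2.4. The 3×3 median window is an assumption (the patent does not specify the filter size), and the subsequent ±displacement correction of the right or left boundary described above is left to the caller.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median3x3(img):
    """3x3 median filter with edge padding (window size is an assumption)."""
    padded = np.pad(img, 1, mode='edge')
    windows = sliding_window_view(padded, (3, 3))
    return np.median(windows, axis=(2, 3))

def object_boundaries(dvi_a, dvi_b):
    """Locate the extent of the non-zero points of the median-filtered
    difference image of two adjacent views (steps 2.3/2.4).
    Returns (top, bottom, left, right) as 0-based row/column indices."""
    diff = np.abs(dvi_a.astype(float) - dvi_b.astype(float))
    diff = median3x3(diff)
    rows, cols = np.nonzero(diff)
    return rows.min(), rows.max(), cols.min(), cols.max()
```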

3. Calculate the horizontal relative displacement of the clipping window between any two horizontally adjacent discrete viewpoint images in the array, and its vertical relative displacement between any two vertically adjacent discrete viewpoint images:

The expressions for these two displacements of the clipping window are:

MH = DH + delta

MV = DV + delta

where MH and MV are the horizontal and vertical relative displacements of the clipping window between any two horizontally (respectively, vertically) adjacent discrete viewpoint images, DH and DV are the corresponding horizontal and vertical relative displacements of the photographed object, and delta is the depth influence factor;

Depending on actual needs, delta can take any value within the permitted range, which is:

delta_max = min[(MH_max - DH), (MV_max - DV)]

delta_min = 0

where delta_max and delta_min are the maximum and minimum permitted values of delta, MH_max and MV_max are the maximum horizontal and vertical relative displacements of the clipping window between adjacent discrete viewpoint images, and min(·) denotes the minimum of the values in parentheses;

The expressions for MH_max and MV_max are:

MH_max = min(X - IR, IL) / (M - 1)

MV_max = min(Y - IB, IT) / (N - 1)

where IR and IB are the positions of the object's right and bottom boundaries in DVI_{1,1}, IL and IT are the positions of the object's left and top boundaries in the discrete viewpoint image in the last row and last column of the array, and M and N are the numbers of discrete viewpoint images in each row and each column of the array;
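The window displacements and the permitted range of delta follow directly from these formulas. A sketch in the patent's notation (the argument order and the default choice delta = delta_max are assumptions):

```python
def window_displacements(DH, DV, IR, IB, IL, IT, X, Y, M, N, delta=None):
    """Step 3 sketch: clipping-window displacements MH, MV.
    delta defaults to delta_max; any value in [0, delta_max] is permitted."""
    MH_max = min(X - IR, IL) / (M - 1)
    MV_max = min(Y - IB, IT) / (N - 1)
    delta_max = min(MH_max - DH, MV_max - DV)
    if delta is None:
        delta = delta_max
    assert 0 <= delta <= delta_max, "delta outside its permitted range"
    return DH + delta, DV + delta
```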

4. Calculate the size of the clipping window:

The size of the clipping window is given by:

W = IR + (M - 1) × MH - IL

H = IB + (N - 1) × MV - IT

where W and H are the width and height of the clipping window;
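A direct transcription of the window-size formulas, using the patent's symbol names (the helper function itself is illustrative):

```python
def window_size(IR, IB, IL, IT, M, N, MH, MV):
    """Step 4 sketch: width W and height H of the clipping window."""
    W = IR + (M - 1) * MH - IL
    H = IB + (N - 1) * MV - IT
    return W, H
```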

5. Calculate the position of the lower-right corner of the clipping window in the discrete viewpoint image in the first row and first column of the array:

The position of the lower-right corner of the clipping window in DVI_{1,1} is given by:

PH = IR

PV = IB

where PH and PV are the horizontal and vertical coordinates of the lower-right corner of the clipping window in DVI_{1,1};

6. Clip the discrete viewpoint image array to generate the sub-image array:

Clip each discrete viewpoint image in the array with the clipping window to generate the sub-image array. The number of sub-images in the sub-image array equals the number of discrete viewpoint images in the discrete viewpoint image array. The sub-image in column i, row j of the sub-image array is given by:

SI_{i,j}(u, v) = DVI_{i,j}(PH + (i - 1) × MH - W + u, PV + (j - 1) × MV - H + v)

where SI_{i,j} is the sub-image in column i, row j of the sub-image array, and u = 1, 2, …, W and v = 1, 2, …, H are the horizontal and vertical pixel coordinates in the sub-image;
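Step 6 can be sketched as follows, assuming each DVI is a NumPy array indexed [row, column] and that PH, PV are 1-based coordinates of the window's lower-right corner in DVI_{1,1} (storage conventions are assumptions, not specified by the patent):

```python
import numpy as np

def clip_subimages(dvis, W, H, PH, PV, MH, MV):
    """Step 6 sketch: clip every discrete viewpoint image with the moving window.
    dvis[j-1][i-1] holds DVI_{i,j}; returns the sub-image array SI with the
    same layout.  PH, PV are 1-based, so 0-based slices end at PH (exclusive)."""
    N, M = len(dvis), len(dvis[0])
    SI = [[None] * M for _ in range(N)]
    for j in range(1, N + 1):
        for i in range(1, M + 1):
            right = PH + (i - 1) * MH     # window's right edge in DVI_{i,j}
            bottom = PV + (j - 1) * MV    # window's bottom edge in DVI_{i,j}
            SI[j - 1][i - 1] = dvis[j - 1][i - 1][bottom - H:bottom, right - W:right]
    return SI
```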

7. Convert the sub-image array into the stereo element image array:

The stereo element image array converted from the sub-image array contains W × H stereo element images, each of size M pixels × N pixels. The stereo element image in column p, row q of the array is given by:

EI_{p,q}(r, t) = SI_{r,t}(p, q)

where EI_{p,q} is the stereo element image in column p, row q of the stereo element image array, and r = 1, 2, …, M and t = 1, 2, …, N are the horizontal and vertical pixel coordinates in the stereo element image.
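The pixel remapping of step 7 is a pure index swap, which NumPy expresses as a transpose of a 4-D stack (the storage conventions below are assumptions):

```python
import numpy as np

def subimages_to_elemental(SI):
    """Step 7 sketch: EI_{p,q}(r, t) = SI_{r,t}(p, q).
    SI[t-1][r-1] is the W x H sub-image in column r, row t, stored as [v, u];
    returns a 4-D array EI where EI[q-1, p-1] is the stereo element image in
    column p, row q, stored as [t, r] (i.e., M x N pixels)."""
    N, M = len(SI), len(SI[0])
    stack = np.array([[SI[t][r] for r in range(M)] for t in range(N)])  # [t, r, v, u]
    return stack.transpose(2, 3, 0, 1)                                  # [q, p, t, r]
```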

Based on the acquisition principle of the stereo element image array in an integral stereoscopic imaging system, the invention obtains the discrete viewpoint image array with a single camera mounted on a stereo photography rail. Exploiting the imaging relationship between the discrete viewpoint image array and the sub-image array, and the mapping between the sub-image array and the stereo element image array, the invention provides a method for generating the stereo element image array by window clipping of the discrete viewpoint image array, which achieves a high-resolution stereo element image array without expensive acquisition equipment or a heavy workload.

The invention can generate stereo element image arrays of relatively large objects; the generated array provides a continuous viewing angle during display and faithfully reproduces the structural information of the object. Compared with the lens-array acquisition method, the invention is not limited by the acquisition equipment and can generate high-resolution stereo element image arrays. Compared with the camera-array acquisition method, the invention uses only one camera, greatly reducing shooting cost and alignment workload.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the stereo element image array acquisition process of the integral stereoscopic imaging system

Figure 2 is a schematic diagram of the stereo element image array display process of the integral stereoscopic imaging system

In the figures: 1. real object; 2. light rays; 3. lens array; 4. recording medium; 5. stereo element image; 6. stereo element image array; 7. illumination; 8. flat-panel display; 9. stereoscopic image

Figure 3 is a flow chart of the method for generating a stereo element image array by discrete viewpoint acquisition combined with a window clipping algorithm

Figure 4 is a schematic diagram of the discrete viewpoint image array acquisition platform

Figure 5 is a schematic diagram of the discrete viewpoint image array

Figure 6 shows, enlarged, the nine discrete viewpoint images at columns 1, 12, and 24 of rows 1, 12, and 24 of the discrete viewpoint image array

Figure 7 is a flow chart of the method for calculating the positions of the object's top, bottom, left, and right boundaries in DVI_{1,1}

Figure 8 is a schematic diagram of the sub-image array generated by the window clipping algorithm

Detailed Description

The invention is described in further detail below with reference to the accompanying drawings. The specific procedure of the method for generating a stereo element image array by discrete viewpoint acquisition combined with a window clipping algorithm (shown in Figure 3) comprises the following steps:

1. Acquire the discrete viewpoint image array

Figure 4 shows the acquisition platform for the discrete viewpoint image array. The photographed objects are two toy trucks; the front truck is closer to the camera than the rear one and partially occludes it. The acquisition process comprises the following steps:

Step 1: Initialization. Adjust the two tripods supporting the stereo photography rail to fix the rail at a relatively low height, and mount the camera on the rail; the rail moves the camera from right to left at a constant speed, and the operator starts and stops the rail with a remote control.

Step 2: Acquire a discrete viewpoint image group. Use a spirit level to adjust the plane of the rail parallel to the horizontal plane, move the camera to the rightmost end of the rail with the remote control, then hold down the camera's shutter release while starting the rail; as the camera moves, use its continuous-shooting function to capture multiple discrete viewpoint images of the subject. Arranging the captured images in a row from left to right in shooting order yields the discrete viewpoint image group acquired at this rail height.

Step 3: Using a vertical ruler, raise the two tripods supporting the rail so that the rail rises by a fixed distance.

Step 4: Repeat the shooting process of steps 2 and 3 to obtain multiple discrete viewpoint image groups.

Step 5: Arrange all discrete viewpoint image groups from top to bottom in acquisition order into a discrete viewpoint image array, i.e., place the first group acquired in the first row of the array and the last group acquired in the last row. The resulting discrete viewpoint image array is shown in Figure 5. For ease of observation, Figure 6 enlarges the nine discrete viewpoint images at columns 1, 12, and 24 of rows 1, 12, and 24 of the array; as the viewing angle moves, the array shows the red truck changing from visible to invisible.

2. Calculate the position of the photographed object in each discrete viewpoint image

The position parameters of the subject in each discrete viewpoint image are: the horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images in the array; the vertical relative displacement of the subject between any two vertically adjacent discrete viewpoint images in the array; the positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the first row and first column of the array; and the positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the last row and last column of the array.

Let DVI_{i,j} denote the discrete viewpoint image in column i and row j of the array. The horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images is computed as follows: first, shift DVI_{1,1} to the right pixel by pixel; after each shift, compute the peak signal-to-noise ratio (PSNR) of the overlapping part of the shifted DVI_{1,1} and DVI_{2,1}, defined as:

PSNR(s) = 10 × log10[255² / MSE(s)]

where s is the shift distance of DVI_{1,1}, PSNR(s) is the peak signal-to-noise ratio of the overlapping part of the shifted DVI_{1,1} and DVI_{2,1}, and MSE(s) is the mean squared error, defined as:

MSE(s) = [1 / ((X − s) × Y)] Σ_{x=0}^{X−1−s} Σ_{y=0}^{Y−1} [DVI_{1,1}(x, y) − DVI_{2,1}(x + s, y)]²

where x and y are the horizontal and vertical pixel coordinates in DVI_{1,1}, and X and Y are the horizontal and vertical resolutions of a discrete viewpoint image.

Then, the shift distance at which the PSNR is maximal is taken as the horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images in the array.
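As a sketch of this search (our own Python/NumPy rendering, not part of the patent; the function name and the [y, x] array layout are assumptions), the displacement can be found by scanning every shift and keeping the one that maximizes the PSNR:

```python
import numpy as np

def horizontal_displacement(dvi_a, dvi_b):
    """Shift dvi_a right pixel by pixel and return the shift s that
    maximizes the PSNR of its overlap with dvi_b (grayscale arrays
    indexed [y, x], matching the MSE(s) formula above)."""
    Y, X = dvi_a.shape
    best_s, best_psnr = 0, -np.inf
    for s in range(1, X):
        # overlap after shifting right by s: dvi_a columns 0..X-1-s
        # against dvi_b columns s..X-1
        diff = dvi_a[:, :X - s].astype(float) - dvi_b[:, s:].astype(float)
        mse = np.mean(diff ** 2)
        if mse == 0:            # perfect alignment
            return s
        psnr = 10 * np.log10(255.0 ** 2 / mse)
        if psnr > best_psnr:
            best_psnr, best_s = psnr, s
    return best_s
```

The vertical displacement of two vertically adjacent images can then be obtained, as the text describes, by transposing both images first: `horizontal_displacement(dvi11.T, dvi12.T)`.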

The vertical relative displacement of the subject between any two vertically adjacent discrete viewpoint images is computed by first transposing DVI_{1,1} and DVI_{1,2}, and then applying the same procedure used for the horizontal relative displacement.

The positions of the subject's top, bottom, left and right borders in DVI_{1,1} are computed as follows (see Fig. 7): first, compute the difference image of DVI_{1,1} and DVI_{2,1} and apply median filtering; then take the top, bottom and left boundaries of all non-zero points in the filtered difference image as the positions of the subject's top, bottom and left borders in DVI_{1,1}, and take the right boundary of all non-zero points minus the subject's horizontal relative displacement as the position of the subject's right border in DVI_{1,1}.
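A minimal sketch of this bounding step (our own code, not part of the patent; the text does not specify the median-filter kernel, so a 3×3 window is assumed here):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication (kernel size assumed)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0)

def object_bounds(dvi11, dvi21, dh):
    """Top/bottom/left/right borders of the subject in DVI_{1,1}:
    difference image -> median filter -> bounding box of non-zero
    points, with the right border pulled back by the horizontal
    relative displacement dh."""
    diff = median3x3(np.abs(dvi11.astype(float) - dvi21.astype(float)))
    ys, xs = np.nonzero(diff)
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max()) - dh
```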

The positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the last row and last column of the array are computed as follows: first, compute the difference image of the discrete viewpoint image in the last row and last column and the one in the last row and second-to-last column, and apply median filtering; then take the top, bottom and right boundaries of all non-zero points in the filtered difference image as the positions of the subject's top, bottom and right borders in that image, and take the left boundary of all non-zero points plus the subject's horizontal relative displacement as the position of the subject's left border in the discrete viewpoint image in the last row and last column of the array.

3. Calculate the horizontal relative displacement of the interception window between any two horizontally adjacent discrete viewpoint images and its vertical relative displacement between any two vertically adjacent discrete viewpoint images:

MH = DH + delta

MV = DV + delta

where MH and MV are the horizontal and vertical relative displacements of the interception window between any two horizontally and vertically adjacent discrete viewpoint images, respectively; DH and DV are the corresponding horizontal and vertical relative displacements of the subject; and delta is the depth influence factor.

Depending on actual needs, delta may take any value within the allowed range:

delta_max = min[(MH_max − DH), (MV_max − DV)]

delta_min = 0

where delta_max and delta_min are the maximum and minimum allowed values of delta; MH_max and MV_max are the maxima of the interception window's horizontal and vertical relative displacements, respectively; and min(·) denotes taking the minimum of the values in parentheses.

MH_max and MV_max are given by:

MH_max = min(X − IR, IL) / (M − 1)

MV_max = min(Y − IB, IT) / (N − 1)

where IR and IB are the positions of the subject's right and bottom borders in DVI_{1,1}; IL and IT are the positions of the subject's left and top borders in the discrete viewpoint image in the last row and last column of the array; and M and N are the numbers of discrete viewpoint images in each row and each column of the array, respectively.
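The window displacements and the admissible range of delta follow directly from these expressions; the sketch below is our own rendering (function name assumed, example values arbitrary):

```python
def window_displacements(DH, DV, IR, IB, IL, IT, X, Y, M, N, delta):
    """MH = DH + delta and MV = DV + delta, with delta validated
    against delta_max = min(MH_max - DH, MV_max - DV) and delta_min = 0."""
    MH_max = min(X - IR, IL) / (M - 1)
    MV_max = min(Y - IB, IT) / (N - 1)
    delta_max = min(MH_max - DH, MV_max - DV)
    if not 0 <= delta <= delta_max:
        raise ValueError("delta outside the allowed range [0, %g]" % delta_max)
    return DH + delta, DV + delta
```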

4. Calculate the size of the interception window

The size of the interception window is:

W = IR + (M − 1) × MH − IL

H = IB + (N − 1) × MV − IT

where W and H are the width and height of the interception window, respectively.

5. Calculate the position of the lower-right corner of the interception window in the discrete viewpoint image in the first row and first column of the array

The lower-right corner of the interception window in DVI_{1,1} is located at:

PH = IR

PV = IB

where PH and PV are the horizontal and vertical coordinates of the lower-right corner of the interception window in DVI_{1,1}, respectively.
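Steps 4 and 5 are direct arithmetic; as a sketch (our own function name, not from the patent):

```python
def window_geometry(IR, IB, IL, IT, M, N, MH, MV):
    """Window size (step 4) and its lower-right corner in DVI_{1,1}
    (step 5), following the expressions above."""
    W = IR + (M - 1) * MH - IL   # width
    H = IB + (N - 1) * MV - IT   # height
    PH, PV = IR, IB              # lower-right corner = subject's
                                 # right/bottom borders in DVI_{1,1}
    return W, H, PH, PV
```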

6. Intercept the discrete viewpoint image array to generate the sub-image array

Fig. 8 illustrates the generation of the sub-image array; the white boxes denote the interception window. The interception window is applied to every discrete viewpoint image in the array to generate the sub-image array, which contains as many sub-images as there are discrete viewpoint images in the array. The sub-image in column i and row j of the sub-image array is:

SI_{i,j}(u, v) = DVI_{i,j}(PH + (i − 1) × MH − W + u, PV + (j − 1) × MV − H + v)

where SI_{i,j} is the sub-image in column i and row j of the sub-image array, and u = 1, 2, …, W and v = 1, 2, …, H are the horizontal and vertical pixel coordinates within the sub-image.
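In 0-based array indexing, the 1-based expression above becomes a pair of slices whose right/bottom edges advance by MH and MV per column and row; a sketch under our own conventions (assuming integer MH and MV):

```python
import numpy as np

def extract_subimages(dvi, PH, PV, MH, MV, W, H):
    """Cut the interception window out of every discrete viewpoint
    image. dvi[j][i] is the image in row j, column i as a 2-D array
    indexed [y, x]; the window's exclusive right/bottom edges for
    (i, j) are PH + i*MH and PV + j*MV (0-based rendering of the
    1-based formula above)."""
    N, M = len(dvi), len(dvi[0])
    sub = [[None] * M for _ in range(N)]
    for j in range(N):
        for i in range(M):
            x1 = PH + i * MH
            y1 = PV + j * MV
            sub[j][i] = dvi[j][i][y1 - H:y1, x1 - W:x1]
    return sub
```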

7. Convert the sub-image array into the elemental image array

The elemental image array converted from the sub-image array contains W × H elemental images, each with a resolution of M × N. The elemental image in column p and row q of the elemental image array is:

EI_{p,q}(r, t) = SI_{r,t}(p, q)

where EI_{p,q} is the elemental image in column p and row q of the elemental image array, and r = 1, 2, …, M and t = 1, 2, …, N are the horizontal and vertical pixel coordinates within the elemental image.
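Because EI_{p,q}(r, t) = SI_{r,t}(p, q) is a pure pixel permutation, the whole conversion is a single 4-D axis transpose once the sub-images are stacked; a sketch with our own layout conventions:

```python
import numpy as np

def subimages_to_elemental(sub):
    """Convert the sub-image array into the elemental image array.
    With the sub-images stacked as A[t, r, q, p] (row t, column r,
    pixel row q, pixel column p), the elemental array E satisfies
    E[q, p, t, r] = A[t, r, q, p], i.e. W x H elemental images of
    resolution M x N."""
    A = np.stack([np.stack(row) for row in sub])   # shape (N, M, H, W)
    return A.transpose(2, 3, 0, 1)                 # shape (H, W, N, M)
```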

Claims (1)

1. A method for generating an elemental image array by discrete viewpoint acquisition combined with a window interception algorithm, characterized by comprising the following steps:
1.1 acquiring a discrete viewpoint image array, comprising the following steps:
1.1.1 initialization: adjusting the two tripods supporting the stereo photography track to fix the track at a lower height, and mounting the camera on the track; the track moves the camera from right to left at constant speed, and the operator starts and stops the track with a remote control;
1.1.2 acquiring a discrete viewpoint image group: using a spirit level to adjust the plane of the stereo photography track parallel to the horizontal plane, moving the camera to the rightmost end of the track with the remote control, pressing and holding the camera's shutter release cable while starting the track, capturing multiple discrete viewpoint images of the subject with the camera's continuous-shooting function as the camera moves, and arranging the captured images in a row from left to right in shooting order, thereby obtaining the discrete viewpoint image group acquired with the track at this height;
1.1.3 using the vertical ruler, raising the two tripods supporting the stereo photography track so that the track rises by a fixed distance;
1.1.4 repeating the shooting process of steps 1.1.2 and 1.1.3 to obtain multiple discrete viewpoint image groups;
1.1.5 arranging all discrete viewpoint image groups from top to bottom in acquisition order to form a discrete viewpoint image array, that is, placing the first group acquired in the first row of the array and the last group acquired in the last row;
1.2 calculating the position of the subject in each discrete viewpoint image:
the position parameters of the subject in each discrete viewpoint image being: the horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images in the array, the vertical relative displacement of the subject between any two vertically adjacent discrete viewpoint images in the array, the positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the first row and first column of the array, and the positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the last row and last column of the array;
1.2.1 determining the horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images, computed as follows: let DVI_{i,j} denote the discrete viewpoint image in column i and row j of the array; first, shift DVI_{1,1} to the right pixel by pixel and, after each shift, compute the peak signal-to-noise ratio of the overlapping part of the shifted DVI_{1,1} and DVI_{2,1}, defined as:

PSNR(s) = 10 × log10[255² / MSE(s)]

where s is the shift distance of DVI_{1,1}, PSNR(s) is the peak signal-to-noise ratio of the overlapping part of the shifted DVI_{1,1} and DVI_{2,1}, and MSE(s) is the mean squared error, defined as:

MSE(s) = [1 / ((X − s) × Y)] Σ_{x=0}^{X−1−s} Σ_{y=0}^{Y−1} [DVI_{1,1}(x, y) − DVI_{2,1}(x + s, y)]²

where x and y are the horizontal and vertical pixel coordinates in DVI_{1,1}, and X and Y are the numbers of pixels in the horizontal and vertical directions of a discrete viewpoint image;
then, taking the shift distance at which the PSNR is maximal as the horizontal relative displacement of the subject between any two horizontally adjacent discrete viewpoint images;
1.2.2 determining the vertical relative displacement of the subject between any two vertically adjacent discrete viewpoint images, computed by first transposing DVI_{1,1} and DVI_{1,2} and then applying the same procedure as for the horizontal relative displacement;
1.2.3 determining the positions of the subject's top, bottom, left and right borders in DVI_{1,1}, computed as follows: first, computing the difference image of DVI_{1,1} and DVI_{2,1} and applying median filtering; then taking the top, bottom and left boundaries of all non-zero points in the filtered difference image as the positions of the subject's top, bottom and left borders in DVI_{1,1}, and taking the right boundary of all non-zero points minus the subject's horizontal relative displacement as the position of the subject's right border in DVI_{1,1};
1.2.4 determining the positions of the subject's top, bottom, left and right borders in the discrete viewpoint image in the last row and last column of the array, computed as follows: first, computing the difference image of the discrete viewpoint image in the last row and last column and the one in the last row and second-to-last column, and applying median filtering; then taking the top, bottom and right boundaries of all non-zero points in the filtered difference image as the positions of the subject's top, bottom and right borders in that image, and taking the left boundary of all non-zero points plus the subject's horizontal relative displacement as the position of the subject's left border in the discrete viewpoint image in the last row and last column of the array;
1.3 calculating the horizontal relative displacement of the interception window between any two horizontally adjacent discrete viewpoint images and its vertical relative displacement between any two vertically adjacent discrete viewpoint images:

MH = DH + delta
MV = DV + delta

where MH and MV are the horizontal and vertical relative displacements of the interception window between any two horizontally and vertically adjacent discrete viewpoint images, respectively, DH and DV are the corresponding horizontal and vertical relative displacements of the subject, and delta is the depth influence factor;
depending on actual needs, delta may take any value within the allowed range:

delta_max = min[(MH_max − DH), (MV_max − DV)]
delta_min = 0

where delta_max and delta_min are the maximum and minimum allowed values of delta, MH_max and MV_max are the maxima of the interception window's horizontal and vertical relative displacements, and min(·) denotes taking the minimum of the values in parentheses;
MH_max and MV_max being given by:

MH_max = min(X − IR, IL) / (M − 1)
MV_max = min(Y − IB, IT) / (N − 1)

where IR and IB are the positions of the subject's right and bottom borders in DVI_{1,1}, IL and IT are the positions of the subject's left and top borders in the discrete viewpoint image in the last row and last column of the array, and M and N are the numbers of discrete viewpoint images in each row and each column of the array, respectively;
1.4 calculating the size of the interception window:

W = IR + (M − 1) × MH − IL
H = IB + (N − 1) × MV − IT

where W and H are the width and height of the interception window, respectively;
1.5 calculating the position of the lower-right corner of the interception window in the discrete viewpoint image in the first row and first column of the array:

PH = IR
PV = IB

where PH and PV are the horizontal and vertical coordinates of the lower-right corner of the interception window in DVI_{1,1}, respectively;
1.6 intercepting the discrete viewpoint image array to generate the sub-image array:
applying the interception window to every discrete viewpoint image in the array to generate the sub-image array, the number of sub-images in the sub-image array being equal to the number of discrete viewpoint images in the discrete viewpoint image array, the sub-image in column i and row j of the sub-image array being:

SI_{i,j}(u, v) = DVI_{i,j}(PH + (i − 1) × MH − W + u, PV + (j − 1) × MV − H + v)

where SI_{i,j} is the sub-image in column i and row j of the sub-image array, and u = 1, 2, …, W and v = 1, 2, …, H are the horizontal and vertical pixel coordinates within the sub-image;
1.7 converting the sub-image array into the elemental image array:
the elemental image array converted from the sub-image array containing W × H elemental images, each of size M pixels × N pixels, the elemental image in column p and row q of the elemental image array being:

EI_{p,q}(r, t) = SI_{r,t}(p, q)

where EI_{p,q} is the elemental image in column p and row q of the elemental image array, and r = 1, 2, …, M and t = 1, 2, …, N are the horizontal and vertical pixel coordinates within the elemental image.
CN201310051957.4A 2013-02-15 2013-02-15 Method of generating stereo image array of discrete view collection combined window intercept algorithm Expired - Fee Related CN103096113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310051957.4A CN103096113B (en) 2013-02-15 2013-02-15 Method of generating stereo image array of discrete view collection combined window intercept algorithm


Publications (2)

Publication Number Publication Date
CN103096113A CN103096113A (en) 2013-05-08
CN103096113B true CN103096113B (en) 2015-01-07

Family

ID=48208165


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108495111B (en) * 2018-04-11 2019-12-13 吉林大学 Stereo element image array coding method based on imaging geometric characteristics

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102065313A (en) * 2010-11-16 2011-05-18 上海大学 Uncalibrated multi-viewpoint image correction method for parallel camera array
CN102223556A (en) * 2011-06-13 2011-10-19 天津大学 Multi-view stereoscopic image parallax free correction method
CN102447934A (en) * 2011-11-02 2012-05-09 吉林大学 Synthetic method of stereoscopic elements in combined stereoscopic image system collected by sparse lens
CN102523462A (en) * 2011-12-06 2012-06-27 南开大学 Method and device for rapidly acquiring elemental image array based on camera array

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
GB0329312D0 (en) * 2003-12-18 2004-01-21 Univ Durham Mapping perceived depth to regions of interest in stereoscopic images




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150107