CN105025287A - A Method for Constructing a Stereoscopic Panorama of a Scene Using Rotated Video Sequence Images - Google Patents
- Publication number
- CN105025287A (Application number CN201510373958.XA)
- Authority
- CN
- China
- Prior art keywords
- images
- image
- strip
- camera
- video sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
Description
Technical Field
The present invention relates to the fields of video image stitching, computer vision, digital image processing, and three-dimensional visualization of geographic information, and in particular to a method for generating stereoscopic panoramas from video sequence images.
Background Art
As a common mass medium, video sequences carry very rich information. They are easy to acquire and express geographic space vividly and objectively, forming real scenes that "view the world from the side". Taking video sequence images as the data basis, research on generating panoramic 3D stereoscopic images from video sequences will greatly enrich the capabilities of GIS in data acquisition, video browsing, and modeling. A panoramic 3D stereoscopic image is itself a modeling representation that is intuitive, simple, and convenient, and it provides an excellent means of landscape representation for 3D GIS.
Several researchers have studied stitching ordinary video sequences into panoramic stereoscopic images. Ishiguro et al. stitched image sequences captured at equal rotation angles into panoramic stereoscopic images (Ishiguro H, Yamamoto M, Tsuji S. Omni-directional stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 257-262). Building on Ishiguro's method, Peleg and Ben-Ezra used a circular projection model to relax the imaging conditions, removing the strict equal-angle requirement and initially achieving panoramic stereo imaging with an ordinary camera (Peleg S, Ben-Ezra M. Stereo panorama with a single camera. IEEE Conference on Computer Vision and Pattern Recognition, 1999: 395-401). Cho et al. rotated a single camera, computed the strips between overlapping images, and stitched the images into a panoramic stereoscopic image (Junguk C, Joon-Hyuk C, Yongmin T, et al. Stereo Panoramic Image Stitching with a Single Camera. IEEE International Conference on Consumer Electronics, 2013: 256-257). Kang et al. used an image layering method to compute depth information for the panoramic image and generated a panoramic stereoscopic image on that basis (Kang S B. Layered Depth Panoramas. IEEE Conference on Computer Vision and Pattern Recognition, 2007: 1-8).
Current image-stitching-based methods for generating stereoscopic panoramas impose strict capture conditions or numerous restrictions on the shooting procedure, and are not suitable for constructing true three-dimensional stereoscopic panoramas from video sequences captured casually with an ordinary digital camera.
Summary of the Invention
Based on video sequence images, the present invention uses a central-circle projection model and an adaptive strip size and position calculation method to stitch the video sequence images into a stereoscopic panoramic image. The method makes full use of the redundant information between video frames to generate a true three-dimensional stereoscopic panorama, effectively avoids the inconvenience of shooting with multiple cameras, and offers the general public a convenient way to generate stereoscopic panoramic images.
To achieve the above object, the present invention adopts the following technical solution:
A method for constructing a stereoscopic panorama of a scene from video sequence images captured by rotation comprises the following steps:
1) Read the video frames one by one, extract feature matches between consecutive images, and estimate the intrinsic and extrinsic parameters of the camera;
2) Compute the virtual rotation speed of the camera from the feature matches between images, and determine the strip size from the rotation speed;
3) Cylindrically project the video frames according to the central-circle projection principle, and compute the position of each image strip from the disparity estimate;
4) Extract the left and right strip images from the video frames according to the strip size and position, and panorama-stitch all extracted left strips and all extracted right strips separately to generate a left panoramic image and a right panoramic image with disparity.
The specific procedure of step 1) is:
(1-1) Read the consecutive frames of the video sequence one by one;
(1-2) Match features between frames with the ORB (Oriented FAST and Rotated BRIEF) algorithm;
(1-3) Because of noise and moving objects, the match set contains mismatched points; based on the epipolar principle, reject them with the RANSAC (Random Sample Consensus) method;
(1-4) From the correct match set, compute the homography transformation between images, derive the camera intrinsic and extrinsic parameter matrices from the homography, and refine them according to the bundle adjustment principle.
The specific procedure of step 2) is:
(2-1) From the matched feature points extracted in step 1), compute the horizontal and vertical offset of each point pair;
(2-2) Sum and average the horizontal offsets over all point pairs to obtain the horizontal rotation speed of the camera;
(2-3) Sum and average the vertical offsets over all point pairs to obtain the vertical rotation speed of the camera;
(2-4) On the basis of the first three sub-steps, compute the strip size.
The specific procedure of step 3) is:
(3-1) Project the video frames onto the inner surface of a cylinder;
(3-2) Inside the cylinder, set a viewpoint circle whose diameter is close to the human interocular distance, and construct left and right virtual cameras on the viewpoint circle to virtually image the video frames projected onto the cylinder;
(3-3) From the strip size determined in step 2), the rotation orientation of the virtual cameras, and the disparity estimate, compute the positions of the strips captured by the left and right virtual cameras according to the central-circle projection model.
The specific procedure of step 4) is:
(4-1) Using the computed strip size and positions, extract the left and right virtual-camera strip images from the cylindrically projected video frames;
(4-2) Panorama-stitch the extracted left and right virtual-camera strip images to obtain left and right panoramic images with disparity, forming a stereoscopic panoramic image pair.
Compared with the prior art, the method of the present invention has the following features:
1. The method uses a single camera and, on the basis of central-circle projection, adaptively extracts strips from the video frames to generate left and right panoramic images with disparity, effectively avoiding the inconvenience of shooting with multiple cameras;
2. The stereoscopic panorama is generated without manual intervention and with a high degree of automation, providing the general public with a convenient way to produce panoramic stereoscopic images.
Therefore, the stereoscopic panorama generation method of the present invention is simple: only one camera rotating around the scene is required. It is suitable for ordinary users who capture a video sequence around an approximately fixed viewpoint with common video equipment, and the result can be viewed in true 3D on various stereoscopic display devices (such as red-green glasses).
Brief Description of the Drawings
Fig. 1 is a flowchart of an embodiment of the present invention;
Fig. 2 shows the cylindrical projection of video frames in an embodiment of the present invention;
Fig. 3 illustrates the central-circle projection principle of an embodiment of the present invention;
Fig. 4 shows the simplified central-circle projection computation model of an embodiment of the present invention;
Fig. 5 shows the cylindrical projection of the video sequence of the present invention and the extraction of the left and right strip images.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and an embodiment.
As shown in Fig. 1, the method for constructing a stereoscopic panorama of a scene from rotationally captured video sequence images consists of the following four parts:
Step 1: read the video frames one by one, extract feature matches between consecutive images, and estimate the intrinsic and extrinsic parameters of the camera;
Step 2: compute the virtual rotation speed of the camera from the feature matches between images, and compute the strip size from the rotation speed;
Step 3: cylindrically project the video frames according to the central-circle projection principle, and compute the position of each image strip from the disparity estimate;
Step 4: extract the left and right strip images from the video frames according to the strip size and position, stitch all left strips and all right strips separately into a left panoramic image and a right panoramic image with disparity, and view them with display-appropriate equipment (red-green glasses are used as an example) for an immersive stereoscopic experience.
The specific implementation steps of this embodiment are as follows:
Step 1: read the video frames one by one, extract feature matches between consecutive images, and estimate the intrinsic and extrinsic parameters of the camera:
(1-1) Read the consecutive frames of the video sequence one by one and match features between frames with the ORB algorithm; to further refine the correctness of the matched point pairs, symmetric (cross-check) matching is applied;
(1-2) Because of noise and moving objects, the match set contains mismatched points. According to the epipolar principle, two corresponding keypoints lie on their respective epipolar lines, expressed as:

u2ᵀ · F12 · u1 = 0  (1)

where u1 and u2 are the homogeneous coordinates of corresponding match points in the two images and F12 is the fundamental matrix. Based on formula (1), the RANSAC method is used to reject the mismatched points.
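As an illustration of the epipolar test that RANSAC applies to each candidate model, the NumPy sketch below keeps only the match pairs whose algebraic residual |u2ᵀ · F12 · u1| is small. The fundamental matrix and point coordinates are hypothetical (a pure horizontal translation, for which epipolar lines are horizontal), and the full RANSAC loop of random sampling and model re-estimation is omitted.

```python
import numpy as np

def epipolar_inliers(pts1, pts2, F, thresh=1e-3):
    """Keep match pairs whose algebraic residual |u2^T F u1| is below thresh.

    pts1, pts2: (n, 2) arrays of matched pixel coordinates.
    F: 3x3 fundamental matrix.
    """
    n = len(pts1)
    u1 = np.hstack([pts1, np.ones((n, 1))])  # homogeneous coordinates
    u2 = np.hstack([pts2, np.ones((n, 1))])
    residuals = np.abs(np.einsum('ni,ij,nj->n', u2, F, u1))
    return residuals < thresh

# Hypothetical F for a pure horizontal translation: epipolar lines are
# horizontal, so corresponding points must share the same y coordinate.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[10., 20.], [30., 40.]])
pts2 = np.array([[15., 20.], [35., 90.]])  # second pair violates y1 == y2
mask = epipolar_inliers(pts1, pts2, F)     # → [True, False]
```

In practice the residual threshold would be tuned to the noise level of the detector, and RANSAC would re-estimate F from random minimal samples before applying this test.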
(1-3) From the correct match set, compute the homography transformation matrix between images, derive the camera intrinsic and extrinsic parameter matrices from the homography, and refine them according to the bundle adjustment principle.
(1) Image homography transformation
Given n (> 4) pairs of matching points (x, y) ↔ (x', y') in two images, the homography matrix can be computed up to a constant scale factor as follows:
Let h = (h1, h2, h3, h4, h5, h6, h7, h8)ᵀ denote the eight unknown entries of the homography matrix H (with h9 = 1). Each point pair contributes the two linear equations

(x, y, 1, 0, 0, 0, −x'x, −x'y) · h = x'  (2)
(0, 0, 0, x, y, 1, −y'x, −y'y) · h = y'  (3)

so selecting four or more matching point pairs allows the homography matrix H to be solved.
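The two linear equations per point pair stack into a 2n×8 system that ordinary least squares can solve. The sketch below (NumPy; the ground-truth homography and the four source points are hypothetical test data) builds that system and recovers the eight parameters:

```python
import numpy as np

def solve_homography(src, dst):
    """Solve the 8-parameter homography (h9 = 1) by stacking the two
    linear equations contributed by each point pair."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical ground-truth homography and four source points.
H_true = np.array([[1.0, 0.1, 5.0],
                   [0.0, 1.2, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = [(0., 0.), (100., 0.), (0., 100.), (100., 100.)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))  # projective normalization

H = solve_homography(src, dst)              # recovers H_true
```

With noisy real matches, more than four pairs would be used and the same least-squares solve averages out the noise.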
(2) Camera focal length estimation
First compute the homography matrix Hij between consecutive image pairs in the video sequence. By the definition of the planar homography, H = K(R + t·nᵀ)K⁻¹; for a purely rotating camera (t = 0) this gives

H = K·R·K⁻¹  (4)

which can be rewritten as

R = K⁻¹·H·K  (5)

where K = diag(f, f, 1) is the intrinsic matrix of a camera with focal length f and principal point at the image center. Writing H = [hij] (i, j = 0, 1, 2) and using the orthonormality of the first two rows of the rotation matrix R (which holds regardless of the arbitrary scale of H), one obtains

h00² + h01² + h02²/f0² = h10² + h11² + h12²/f0²  (6)
h00·h10 + h01·h11 + h02·h12/f0² = 0  (7)

From (6), the focal length f0 of the first image can be computed as:

f0² = (h12² − h02²) / (h00² + h01² − h10² − h11²)  (8)

or, from (7):

f0² = −(h02·h12) / (h00·h10 + h01·h11)  (9)

f1 is obtained in a similar way from the columns of H. Since the focal length is assumed to remain unchanged during shooting, the final estimate of the camera focal length f is computed as the geometric mean

f = √(f0·f1)  (10)

In a panoramic image sequence, each homography yields its own focal length estimate, and the final f is taken as the geometric mean of all of them. Finally, the rotation transformation matrix R between the two images is computed from the estimated intrinsic matrix K.
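The closed-form focal-length estimates described above can be evaluated directly from the entries of a rotation-induced homography. The NumPy sketch below assumes, as in the text, K = diag(f, f, 1) with the principal point at the image center; the focal length and rotation used to build H are hypothetical test values:

```python
import numpy as np

def focal_from_homography(H):
    """Estimate the source focal length f0 from a rotation-induced
    homography, using the row-norm equality of R, with the
    row-orthogonality estimate as a fallback for the degenerate case."""
    h = H / H[2, 2]  # fix the arbitrary scale (the estimates are scale-invariant)
    denom = h[0, 0]**2 + h[0, 1]**2 - h[1, 0]**2 - h[1, 1]**2
    if abs(denom) > 1e-12:
        f_sq = (h[1, 2]**2 - h[0, 2]**2) / denom          # row-norm estimate
    else:
        f_sq = -h[0, 2] * h[1, 2] / (h[0, 0] * h[1, 0]    # row-orthogonality
                                     + h[0, 1] * h[1, 1]) # estimate
    return np.sqrt(f_sq)

# Hypothetical camera: f = 800 px, small rotation about the y axis.
f = 800.0
K = np.diag([f, f, 1.0])
a = 0.05
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
H = K @ R @ np.linalg.inv(K)
f_est = focal_from_homography(H)  # recovers 800.0
```

On real data the two estimates disagree slightly because of noise, which is why the text averages the per-homography estimates geometrically.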
Step 2: compute the virtual rotation speed of the camera from the feature matches between images, and compute the strip size from the rotation speed:
(2-1) Compute the virtual rotation speed of the camera from the n matched point pairs detected in two consecutive frames:

Vh = (1/n)·Σ|p1i(x) − p2i(x)|,  Vp = (1/n)·Σ|p1i(y) − p2i(y)|  (11)

where Vh is the virtual speed in the horizontal direction, Vp is the virtual speed in the vertical direction, p1i(x) and p1i(y) are the x and y coordinates of the i-th matched point in the frame-1 image, p2i(x) and p2i(y) are the x and y coordinates of the i-th matched point in the frame-2 image, and the sums run over i = 1, …, n.
(2-2) Compute the strip size ΔD from the rotation speed of the camera:

ΔD = Vh  (12)
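The virtual-speed averaging and the strip rule ΔD = Vh above can be sketched in a few lines of NumPy; the matched coordinates below are hypothetical (a pan of about 8 px with 1 px of vertical jitter):

```python
import numpy as np

def rotation_speed_and_strip(pts1, pts2):
    """Virtual rotation speeds and strip size from matched points.

    pts1, pts2: (n, 2) arrays of corresponding points in frames 1 and 2.
    """
    offsets = np.abs(pts1 - pts2)
    v_h = offsets[:, 0].mean()   # mean horizontal offset -> horizontal speed
    v_p = offsets[:, 1].mean()   # mean vertical offset -> vertical speed
    delta_d = v_h                # strip size, eq (12)
    return v_h, v_p, delta_d

# Hypothetical matches: the camera panned ~8 px right between frames.
pts1 = np.array([[100., 50.], [200., 80.], [300., 120.]])
pts2 = np.array([[ 92., 51.], [192., 79.], [292., 121.]])
v_h, v_p, delta_d = rotation_speed_and_strip(pts1, pts2)  # → 8.0, 1.0, 8.0
```

Tying the strip width to the measured rotation speed is what makes the extraction adaptive: fast pans yield wider strips so that consecutive strips still abut.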
Step 3: cylindrically project the video frames according to the central-circle projection principle, and compute the position of each image strip from the disparity estimate:
(3-1) Cylindrically project each video frame onto the inner surface of the cylinder.
A point on the cylinder is parameterized by an angle θ and a height h, as shown in Fig. 2, with the correspondence:

(sin θ, h, cos θ) ∝ (x, y, f)  (13)

From correspondence (13), the mapping from the image plane to the cylindrical surface is:

x' = s·arctan(x/f),  y' = s·y/√(x² + f²)  (14)

where (x, y) are the planar image coordinates, (x', y') are the cylindrical coordinates, and s is the radius of the cylinder; usually s = f, which minimizes the distortion at the image center.
Since the cylinder is a developable surface, an image translated or rotated in cylindrical coordinates keeps its shape. The inverse mapping is expressed as:

x = f·tan(x'/s),  y = (y'/s)·√(x² + f²)  (15)
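A standard cylindrical warp consistent with the parameterization above can be written as a forward/inverse coordinate pair and checked with a round trip. This is a sketch: f and s are hypothetical values with s = f, as suggested in the text, and the patent's own (omitted) formulas may differ in notation.

```python
import numpy as np

def plane_to_cylinder(x, y, f, s):
    """Forward mapping: image-plane point -> cylindrical coordinates."""
    xc = s * np.arctan(x / f)
    yc = s * y / np.sqrt(x**2 + f**2)
    return xc, yc

def cylinder_to_plane(xc, yc, f, s):
    """Inverse mapping: cylindrical coordinates -> image-plane point."""
    x = f * np.tan(xc / s)
    y = (yc / s) * np.sqrt(x**2 + f**2)
    return x, y

f = s = 500.0        # hypothetical focal length, cylinder radius s = f
x, y = 120.0, -40.0  # a sample image-plane point
xc, yc = plane_to_cylinder(x, y, f, s)
xb, yb = cylinder_to_plane(xc, yc, f, s)  # round trip back to (x, y)
```

In an actual warp the inverse mapping is evaluated on the output grid and the source image is sampled with interpolation, so that every cylinder pixel gets a value.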
(3-2) To achieve stereoscopic vision, according to the central-circle projection principle shown in Fig. 3, a central circle whose diameter is twice the human interocular distance is set up inside the cylinder, and one virtual camera is placed on each of its left and right sides, VL and VR, separated by a distance denoted 2d. The real camera is placed at the optical center O with rotation-axis radius r and is rotated 360 degrees around the axis while continuously recording video; each captured frame is projected onto the cylindrical surface, and the rays of the virtual cameras are projected through the optical center O onto the cylindrical projection surface.
(3-3) To facilitate the strip-position computation, the central-circle projection model is simplified as shown in Fig. 4: the left and right virtual cameras VL and VR are separated by 2d, the rotation radius is denoted r, the camera focal length estimated in step 1 is denoted f, and the difference between the left and right strip positions is denoted 2v. By the principle of similar triangles, the relationship between the strip offset and the disparity is (see Fig. 4):

v / f = d / r  (16)

where d is the set human-eye base distance, r is the average rotation radius of the camera during capture, and f is the camera intrinsic focal length computed in step 1. Once v is obtained from this proportion, the position corresponding to each strip can be computed.
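Assuming the similar-triangle relation reduces to v/f = d/r (a plausible reading of the simplified model in Fig. 4, not confirmed by the omitted formula), the strip offset follows directly from d, r, and f. A small worked computation with hypothetical values; which side of the panorama column receives +v is also an assumption:

```python
# Strip offset from the similar-triangle proportion v / f = d / r.
# All numbers below are hypothetical: half eye base d and rotation
# radius r in mm, focal length f and strip width delta_d in pixels.
d = 32.5          # half of a 65 mm eye base
r = 300.0         # average rotation radius during capture
f = 800.0         # focal length estimated in step 1
delta_d = 8.0     # strip size from step 2

v = f * d / r     # strip offset in pixels (≈ 86.67 here)

# Hypothetical column c of the camera's forward ray on the cylinder;
# the sign convention (left = +v) is an illustrative assumption.
c = 960.0
left_strip = (c + v - delta_d / 2, c + v + delta_d / 2)
right_strip = (c - v - delta_d / 2, c - v + delta_d / 2)
```

The two strips therefore sit 2v apart around the forward column, which is exactly the "difference between the left and right strip positions" the model defines.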
Step 4: extract the left and right strip images from the video frames according to the computed strip size and position, stitch all extracted left strips and all extracted right strips separately into a left panoramic image and a right panoramic image with disparity, and view the immersive panoramic stereogram with display-appropriate equipment (red-green glasses are used as an example). The specific steps are as follows:
(4-1) A video sequence produces many frames; every frame is cylindrically projected by the central-circle method, yielding the projected video sequence shown in Fig. 5.
(4-2) Extract the left and right strip images from each frame according to the computed strip size and positions (see Fig. 5);
(4-3) Panorama-stitch all extracted left strips and all extracted right strips separately to generate a left panorama and a right panorama with disparity;
(4-4) Compose the left and right panoramic images into a red-green stereoscopic panorama with the red-green synthesis method, as follows:
The left panoramic image is processed as follows:
(1) Create a red mask image of the same size as the left panorama, with every pixel set to the RGB value (255, 0, 0).
(2) Compute the bitwise OR of every pixel of the left panorama with the corresponding pixel of the mask image to obtain a new left panorama.
The computation is as follows: let the RGB color of pixel (i, j) in the left panorama be (r, g, b) and the color at the same position (i, j) in the red mask image be (255, 0, 0); the OR operation yields the new pixel color (255, g, b).
Similarly, the right panorama is ORed in the same way with a green mask image (RGB (0, 255, 0)) to obtain a new right panorama.
After the new left panorama A and the new right panorama B have been generated, the two panoramas must be merged into a single red-green panorama to produce the stereoscopic effect; therefore, the left and right panoramas are combined with a bitwise AND operation.
Let the RGB value of pixel (i, j) in panorama A be (255, g, b) and that of the corresponding pixel in panorama B be (r, 255, b); the AND operation yields a new RGB value (r', g', b'). Applying the AND operation to all pixels of the A and B images produces the new, composite red-green panoramic image.
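The OR/AND composition described above maps directly onto NumPy's bitwise operators for uint8 images. A minimal sketch, using tiny hypothetical one-pixel "panoramas" in place of the real stitched images:

```python
import numpy as np

def red_green_anaglyph(left, right):
    """Compose left/right panoramas into a red-green anaglyph
    via the OR-with-mask then AND procedure.

    left, right: (H, W, 3) uint8 RGB images of equal size.
    """
    red_mask = np.array([255, 0, 0], dtype=np.uint8)
    green_mask = np.array([0, 255, 0], dtype=np.uint8)
    a = np.bitwise_or(left, red_mask)     # left OR red mask   -> (255, g, b)
    b = np.bitwise_or(right, green_mask)  # right OR green mask -> (r, 255, b)
    return np.bitwise_and(a, b)           # AND merges A and B into one image

# Tiny hypothetical panoramas: one pixel each.
left = np.array([[[10, 20, 30]]], dtype=np.uint8)
right = np.array([[[40, 50, 60]]], dtype=np.uint8)
out = red_green_anaglyph(left, right)     # pixel → (40, 20, 30 & 60)
```

Following these rules, the output red channel carries the right panorama's red values, the green channel carries the left panorama's green values, and the blue channels are combined bitwise.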
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510373958.XA CN105025287A (en) | 2015-06-30 | 2015-06-30 | A Method for Constructing a Stereoscopic Panorama of a Scene Using Rotated Video Sequence Images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105025287A true CN105025287A (en) | 2015-11-04 |
Family
ID=54414950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510373958.XA Pending CN105025287A (en) | 2015-06-30 | 2015-06-30 | A Method for Constructing a Stereoscopic Panorama of a Scene Using Rotated Video Sequence Images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105025287A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106331685A (en) * | 2016-11-03 | 2017-01-11 | Tcl集团股份有限公司 | Method and apparatus for acquiring 3D panoramic image |
CN106355550A (en) * | 2016-10-31 | 2017-01-25 | 微景天下(北京)科技有限公司 | Image stitching system and image stitching method |
CN107454375A (en) * | 2017-01-24 | 2017-12-08 | 江苏思特威电子科技有限公司 | 3D panoramic imaging devices and method |
CN109064397A (en) * | 2018-07-04 | 2018-12-21 | 广州希脉创新科技有限公司 | A kind of image split-joint method and system based on camera shooting earphone |
CN111698463A (en) * | 2019-03-11 | 2020-09-22 | 辉达公司 | View synthesis using neural networks |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000039995A2 (en) * | 1998-09-17 | 2000-07-06 | Yissum Research Development Company | System and method for generating and displaying panoramic images and movies |
US20010038413A1 (en) * | 2000-02-24 | 2001-11-08 | Shmuel Peleg | System and method for facilitating the adjustment of disparity in a stereoscopic panoramic image pair |
CN103109538A (en) * | 2010-09-22 | 2013-05-15 | 索尼公司 | Image processing device, image capture device, image processing method, and program |
CN103109537A (en) * | 2010-09-22 | 2013-05-15 | 索尼公司 | Image processing device, imaging device, and image processing method and program |
CN103260046A (en) * | 2012-02-16 | 2013-08-21 | 中兴通讯股份有限公司 | Three-dimensional display method and system |
Non-Patent Citations (2)
Title |
---|
SHMUEL PELEG, MOSHE BEN-EZRA: "Stereo Panorama with a Single Camera", Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, Vol. 1, IEEE *
LI JIA ET AL.: "Panoramic Image Stitching Method Based on an Uncalibrated Ordinary Camera", Journal of System Simulation *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20151104 |