CN103974055B - 3D photo generation system and method - Google Patents
- Publication number
- CN103974055B (application CN201410029673.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- eye image
- pixel
- vision
- Prior art date
- Legal status (assumption; not a legal conclusion)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/293—Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Processing Or Creating Images (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Description
Technical Field
The present invention relates to photo processing systems and methods and, more particularly, to a 3D photo processing system and method.
Background Art
3D photos are usually produced using lenticular (microlens) technology. A lenticular sheet is an array of magnifying lenses designed so that different images are magnified when viewed from slightly different angles. To generate a 3D photo, a set of multi-viewpoint images, typically 12 or more, must first be generated, and these viewpoint images are then interlaced into a mixed image. Interlacing multi-viewpoint images is the process of extracting the appropriate pixels from each viewpoint image and merging them into a new image that carries the multi-view information of the originals. The lenticular lenses present these viewpoints at different viewing angles, so that, looking through the lenticular sheet, the viewer's left and right eyes see different images and a 3D effect is produced.
At present there are several methods of generating 3D photos. The most common is to convert a 2D image into multi-viewpoint images by hand, which takes hours or even days. The operator typically extracts objects from the target image by creating masks and then assigns depth to each mask by judgment. The depth information is a separate grayscale image of the same size as the original 2D image, in which the gray level encodes the depth of each part of the image. The manually created depth map guides the computer in shifting the pixels of the original 2D image to form new viewpoint images; a well-made depth map can produce a strong 3D effect.
Another approach is to photograph the subject from multiple viewpoints, but this is not practical for moving subjects. It requires one or more cameras arranged to capture the multi-viewpoint images, and the cameras must be positioned carefully so that the angular span of the output images is not too wide.
The multi-viewpoint images are used to construct the mixed image, and most systems build the mixed image directly from data extracted from them. Because the final image is a sub-sample of each viewpoint image, the result cannot preserve the quality of the original images.
In summary, existing 3D photo generation methods and systems suffer from long processing times and low photo quality.
Summary of the Invention
The object of the present invention is to provide a 3D photo generation system and method with fast processing and high photo quality.
The 3D photo generation system of the present invention comprises:
a stereoscopic image input module for inputting a stereoscopic image, the stereoscopic image comprising a left-eye image and a right-eye image;
a depth estimation module for estimating the depth information of the stereoscopic image and generating a depth map;
a multi-viewpoint image reconstruction module for generating multi-viewpoint images from the depth map and the stereoscopic image;
an image interlacing module for adjusting the multi-viewpoint images and forming a mixed image.
In the 3D photo generation system of the present invention, the depth estimation module comprises:
a pixel matching module for comparing the left-eye and right-eye images of the stereoscopic image, finding the corresponding pixels between them, and estimating the optical flow of each pixel according to the optical flow constraint equation;
a depth information determination module for determining the depth of each pixel from the pixel displacement given by the optical flow between the left-eye and right-eye images;
a depth map generation module for generating a depth map from the depth information.
In the 3D photo generation system of the present invention, the multi-viewpoint image reconstruction module comprises:
a base image selection module for selecting the left-eye image, the right-eye image, or both as the base image(s);
an image number determination module for determining the number of required images and their disparity according to requirements;
a pixel shifting module for shifting the pixels of the base image according to the depth map to form new images;
a hole filling module for filling the holes left in the new images by missing pixels;
a multi-viewpoint image generation module for generating the multi-viewpoint images.
In the 3D photo generation system of the present invention, the image interlacing module comprises:
an image adjustment module for adjusting the size of the multi-viewpoint images;
a contrast adjustment module for adjusting the contrast of the resized multi-viewpoint images output by the image adjustment module;
an image synthesis module for merging the contrast-adjusted multi-viewpoint images into a mixed image;
a mixed image output module for outputting the mixed image.
The hole filling module fills the holes left in the new images by missing pixels using interpolation.
The 3D photo generation method of the present invention comprises the following steps:
S1: inputting a stereoscopic image comprising a left-eye image and a right-eye image;
S2: estimating the depth information of the stereoscopic image and generating a depth map;
S3: generating multi-viewpoint images from the depth map and the stereoscopic image;
S4: adjusting the multi-viewpoint images and forming a mixed image.
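As a minimal sketch, steps S1 to S4 can be wired together as below. All function bodies here are illustrative placeholders, not the patented implementation; the names and the column-cycling interlacing rule are assumptions for illustration only.

```python
import numpy as np

def generate_3d_photo(left, right, num_views=12):
    """Hypothetical end-to-end sketch of steps S1-S4.

    left, right: HxW grayscale arrays (the input stereo pair, S1).
    Returns an interlaced mixed image (S4).
    """
    depth = estimate_depth(left, right)                # S2: depth map from the pair
    views = reconstruct_views(left, depth, num_views)  # S3: multi-viewpoint images
    return interlace(views)                            # S4: weave views into one image

def estimate_depth(left, right):
    # Placeholder: a real system matches pixels (optical flow / stereo
    # matching) and normalizes displacements into a 0-255 depth map.
    return np.zeros_like(left)

def reconstruct_views(base, depth, n):
    # Placeholder: a real system shifts base-image pixels per the depth map.
    return [base.copy() for _ in range(n)]

def interlace(views):
    # Assumed weaving rule: output column j is taken from view j mod N.
    h, w = views[0].shape
    out = np.empty((h, w), dtype=views[0].dtype)
    for j in range(w):
        out[:, j] = views[j % len(views)][:, j]
    return out
```
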
In the 3D photo generation method of the present invention, step S2 comprises:
S21: comparing the left-eye and right-eye images of the stereoscopic image, finding the corresponding pixels between them, and estimating the optical flow of each pixel according to the optical flow constraint equation;
S22: determining the depth of each pixel from the pixel displacement given by the optical flow between the left-eye and right-eye images;
S23: generating a depth map from the depth information.
In the 3D photo generation method of the present invention, step S3 comprises:
S31: selecting the left-eye image, the right-eye image, or both as the base image(s);
S32: determining the number of required images and their disparity according to requirements;
S33: shifting the pixels of the base image according to the depth map to form new images;
S34: filling the holes left in the new images by missing pixels;
S35: generating the multi-viewpoint images.
In the 3D photo generation method of the present invention, step S4 comprises:
S41: adjusting the size of the multi-viewpoint images;
S42: adjusting the contrast of the multi-viewpoint images resized in step S41;
S43: merging the contrast-adjusted multi-viewpoint images into a mixed image;
S44: outputting the mixed image.
In the 3D photo generation method of the present invention, step S34 fills the holes left in the new images by missing pixels using interpolation.
Implementing the 3D photo generation system and method of the present invention has the following beneficial effects: the process of generating 3D photos is significantly simplified and the quality of the photos is improved. The system and method can be widely applied in theme parks, tourist attractions, and photo studios, allowing more consumers to enjoy 3D photos.
Brief Description of the Drawings
The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
Figure 1 is a system block diagram of the 3D photo generation system of the present invention;
Figure 2 is a block diagram of the depth estimation module of the 3D photo generation system;
Figure 3 is a block diagram of the multi-viewpoint image reconstruction module of the 3D photo generation system;
Figure 4 is a block diagram of the image interlacing module of the 3D photo generation system;
Figure 5 is a flowchart of the 3D photo generation method of the present invention;
Figure 6 is a flowchart of step S2 of the 3D photo generation method;
Figure 7 is a flowchart of step S3 of the 3D photo generation method;
Figure 8 is a flowchart of step S4 of the 3D photo generation method;
Figure 9 is a schematic diagram of a stereoscopic image input to the 3D photo generation system;
Figure 10 is a schematic diagram comparing the depth map formed by the 3D photo generation system with the original image;
Figure 11 shows the adjusted multi-viewpoint images;
Figure 12 is a schematic diagram of the mixed image.
Detailed Description
For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments are now described in detail with reference to the accompanying drawings.
Figures 1 to 4 show a system block diagram of an embodiment of the 3D photo generation system of the present invention. The system comprises a stereoscopic image input module 1, a depth estimation module 2, a multi-viewpoint image reconstruction module 3, and an image interlacing module 4. The stereoscopic image input module 1 inputs a stereoscopic image comprising a left-eye image and a right-eye image; the depth estimation module 2 estimates the depth information of the stereoscopic image and generates a depth map; the multi-viewpoint image reconstruction module 3 generates multi-viewpoint images from the depth map and the stereoscopic image; and the image interlacing module 4 adjusts the multi-viewpoint images and forms a mixed image.
In the 3D photo generation system of the present invention, the depth estimation module 2 further comprises a pixel matching module 21, a depth information determination module 22, and a depth map generation module 23. The pixel matching module 21 compares the left-eye and right-eye images of the stereoscopic image, finds the corresponding pixels between them, that is, the pixels in the two images that depict the same scene point, and estimates the optical flow of each pixel according to the optical flow constraint equation. The depth information determination module 22 determines the depth of each pixel from the pixel displacement given by the optical flow between the left-eye and right-eye images, and the depth map generation module 23 generates a depth map from the depth information.
In the 3D photo generation system of the present invention, the multi-viewpoint image reconstruction module 3 further comprises a base image selection module 31, an image number determination module 32, a pixel shifting module 33, a hole filling module 34, and a multi-viewpoint image generation module 35. The base image selection module 31 selects the left-eye image, the right-eye image, or both as the base image(s); the image number determination module 32 determines the number of required images and their disparity according to requirements; the pixel shifting module 33 shifts the pixels of the base image according to the depth map to form new images; the hole filling module 34 fills the holes left in the new images by missing pixels; and the multi-viewpoint image generation module 35 generates the multi-viewpoint images.
In the 3D photo generation system of the present invention, the image interlacing module 4 further comprises an image adjustment module 41, a contrast adjustment module 42, an image synthesis module 43, and a mixed image output module 44. The image adjustment module 41 adjusts the size of the multi-viewpoint images; the contrast adjustment module 42 adjusts the contrast of the resized multi-viewpoint images output by the image adjustment module; the image synthesis module 43 merges the contrast-adjusted multi-viewpoint images into a mixed image; and the mixed image output module 44 outputs the mixed image.
Figures 5 to 8 show the flow of the 3D photo generation method of the present invention, which comprises the following steps:
S1: inputting a stereoscopic image comprising a left-eye image and a right-eye image;
S2: estimating the depth information of the stereoscopic image and generating a depth map;
S3: generating multi-viewpoint images from the depth map and the stereoscopic image;
S4: adjusting the multi-viewpoint images and forming a mixed image.
Step S2 further comprises:
S21: comparing the left-eye and right-eye images of the stereoscopic image, finding the corresponding pixels between them, and estimating the optical flow of each pixel according to the optical flow constraint equation;
S22: determining the depth of each pixel from the pixel displacement given by the optical flow between the left-eye and right-eye images;
S23: generating a depth map from the depth information.
Step S3 further comprises:
S31: selecting the left-eye image, the right-eye image, or both as the base image(s);
S32: determining the number of required images and their disparity according to requirements;
S33: shifting the pixels of the base image according to the depth map to form new images;
S34: filling the holes left in the new images by missing pixels;
S35: generating the multi-viewpoint images.
Step S4 further comprises:
S41: adjusting the size of the multi-viewpoint images;
S42: adjusting the contrast of the multi-viewpoint images resized in step S41;
S43: merging the contrast-adjusted multi-viewpoint images into a mixed image;
S44: outputting the mixed image.
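Steps S41 to S43 can be sketched as follows. The column-cycling rule (output column j taken from view j mod N) is a common lenticular interlacing scheme and is assumed here for illustration, since the text does not spell out the weaving order; resizing and contrast adjustment are omitted.

```python
import numpy as np

def interlace_views(views):
    """S43: weave N equally sized views into one mixed image.

    Assumed weaving rule: output column j is copied from view j mod N,
    so adjacent print columns cycle through the viewpoints under each
    lenticule of the lenticular sheet.
    """
    n = len(views)
    w = views[0].shape[1]
    out = np.empty_like(views[0])
    for j in range(w):
        out[:, j] = views[j % n][:, j]
    return out
```

With two views of constant value 0 and 1, the output columns alternate 0, 1, 0, 1, which is the per-lens pixel layout the lenticular sheet separates back into viewpoints.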
The composition of the 3D photo generation system and the steps of the method having been introduced above, a specific example now illustrates how they work. The system takes a stereoscopic image as input and automatically compares the two views to compute the 3D information (also called the depth map). The pixels of the original input images are then shifted according to the depth information to generate the multi-viewpoint images. To improve the quality of the final mixed image, the system resizes the generated images to a suitable size and then merges them. The resulting mixed image can be displayed on a glasses-free 3D display, or bonded to a lenticular sheet to form a 3D photo.
In the 3D photo generation system of the present invention, the stereoscopic image input module 1 inputs a stereoscopic image, that is, a stereogram producing a three-dimensional visual effect. It can be any image pair that conveys a depth-perception experience to an observer through binocular stereopsis, obtained by one or more techniques; a 3D image can also be used directly. In this example, the input is a stereoscopic pair comprising a left-eye image and a right-eye image, as shown in Figure 9.
The depth estimation module 2 analyzes the depth information of the stereoscopic image provided by the stereoscopic image input module 1 in order to reconstruct the multi-viewpoint images; the depth estimation steps are shown in Figure 6. The module comprises a pixel matching module 21, a depth information determination module 22, and a depth map generation module 23. The pixel matching module 21 compares the left-eye and right-eye images of the input stereoscopic image to find the corresponding pixels between them. An object is displaced between the left-eye and right-eye images of a stereoscopic pair; this displacement is called parallax (disparity). To extract the disparity, matching methods such as optical flow and stereo matching are commonly used to find the pixel displacement between the two images. Optical flow is the pattern of apparent motion of objects, surfaces, or edges in a visual scene caused by the relative motion between an observer (for example, an eye or a camera) and the scene. Optical flow estimation computes the flow according to the optical flow constraint equation: to find matching pixels, the images are compared subject to the well-known constraint

I_x V_x + I_y V_y + I_t = 0

where V_x and V_y are the x and y components of the velocity (optical flow) of the image I(x, y, t), and I_x, I_y, and I_t are the derivatives of the image at (x, y, t) in the corresponding directions. A coarse-to-fine strategy can be used to determine the optical flow of each pixel, and there are various robust methods for strengthening the disparity estimate, such as high-accuracy optical flow estimation based on a warping theory.
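The matching step can be illustrated with a simplified, dependency-free stand-in: instead of a full coarse-to-fine optical flow solver, the sketch below brute-forces the horizontal displacement of each pixel by comparing small patches of the left image against shifted patches of the right image. This is the same correspondence problem the constraint equation is solved for; the patch size, search range, and sum-of-absolute-differences cost are arbitrary illustrative choices.

```python
import numpy as np

def block_match_disparity(left, right, patch=3, max_disp=8):
    """Estimate per-pixel horizontal displacement from left to right image.

    For each pixel, compare a small patch of the left image against
    horizontally shifted patches of the right image and keep the shift
    with the smallest sum of absolute differences (SAD).
    """
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    # Pad so patches centered on border pixels stay in bounds.
    L = np.pad(left.astype(np.float64), half, mode='edge')
    R = np.pad(right.astype(np.float64), half, mode='edge')
    for y in range(h):
        for x in range(w):
            lp = L[y:y + patch, x:x + patch]      # patch centered at (y, x)
            best_cost, best_d = np.inf, 0
            for d in range(-max_disp, max_disp + 1):
                xs = x + d
                if xs < 0 or xs + patch > R.shape[1]:
                    continue
                cost = np.abs(lp - R[y:y + patch, xs:xs + patch]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left image shifted by two pixels, interior pixels recover a displacement of 2.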
After pixel matching, the depth information can be derived from the disparity information and the camera parameters: the displacement of a pixel indicates its depth. However, most 3D stereo rigs converge the cameras or lenses on a point; in other words, the direction of the optical flow must be taken into account when computing the depth of each pixel. The depth information determination module 22 determines the depth of each pixel from its flow vector: the flow magnitude, taken together with the flow direction and normalized by the maximum displacement, gives the pixel's depth, where maxdisplacement is the maximum pixel displacement, direction is the optical flow direction, and u and v are the x and y components of each pixel's flow vector. This depth information can be used to reconstruct the 3D environment, for example as a depth map, represented as a grayscale image readable by the computer. The depth map generation module 23 generates the depth map. Depth values typically range from 0 to 255, and a higher depth value means the pixel is closer to the viewer. To improve the quality of the 3D photo, the system of the present invention separates foreground and background in the depth map: pixels with depth values in the range 99 to 255 represent the foreground, and pixels with depth values in the range 0 to 128 represent the background. The foreground and background depth ranges overlap; in this embodiment the overlap is 99 to 128, and the overlap range can be adjusted by the user. This process increases the contrast between foreground and background and further enhances the depth detail of the main foreground objects and of the background. Figure 10 is a schematic diagram of the image with foreground and background separated.
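The source gives the depth formula only as a figure, so the sketch below is a hedged reconstruction consistent with the variables named above (maxdisplacement, direction, u, v): the flow magnitude is signed by the flow direction, normalized by the maximum displacement, and mapped into the 0-255 range with 128 as the zero-displacement midpoint. The midpoint-gain contrast boost is likewise only an illustrative stand-in for the 99-255 / 0-128 foreground-background scheme.

```python
import numpy as np

def flow_to_depth(u, v, max_displacement, direction):
    """Map per-pixel flow (u, v) to a 0-255 depth value.

    Assumed formula (the original is an image): the flow magnitude,
    signed by `direction` (+1 or -1) and normalized by the maximum
    displacement, is scaled into 0-255 with 128 as the midpoint.
    """
    mag = np.sqrt(u ** 2 + v ** 2)
    norm = direction * mag / max_displacement      # in [-1, 1]
    return np.clip(128 + 127 * norm, 0, 255).astype(np.uint8)

def boost_layer_contrast(depth, midpoint=113, gain=1.5):
    """Illustrative stand-in for the foreground/background separation.

    Values above `midpoint` (center of the assumed 99-128 overlap band)
    are pushed up and values below are pushed down, increasing the
    contrast between foreground and background.
    """
    out = midpoint + (depth.astype(np.float64) - midpoint) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```
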
The multi-viewpoint image reconstruction module 3 reconstructs the multi-viewpoint images and comprises a base image selection module 31, an image number determination module 32, a pixel shifting module 33, a hole filling module 34, and a multi-viewpoint image generation module 35. The base image selection module 31 may select the left-eye image, the right-eye image, or both as the base image(s) from which the multi-viewpoint images are produced; the reconstruction flow is shown in Figure 7. If a single image is selected, for example the left-eye or the right-eye image, the generated images are views to the left and right of it: to generate 2N+1 images, the selected image becomes image N+1, and images 1 to N and N+2 to 2N+1 are generated. For example, to generate 9 images, the selected image is image 5 and images 1 to 4 and 6 to 9 are generated. The image number determination module 32 determines the number of images as required.
On the other hand, if both images (the left-eye and right-eye images) are selected as base images, that is, the stereoscopic pair itself is used, the system uses the disparity and the required number of images to decide the positions of the two selected images. In this embodiment the system first determines the number of images to generate. The number of multi-viewpoint images depends on the lenses per inch of the lenticular sheet and the dots per inch of the printer: N = DPI / LPI, where DPI is the printer's dots per inch and LPI is the lenticular sheet's lenses per inch. For example, with a 50 LPI lenticular sheet and a 600 DPI printer, 600 / 50 = 12 images are required to form a suitable 3D image. The positions of the initial stereo pair are then determined from N, the disparity D of the initial stereoscopic pair, and the per-viewpoint disparity d of the multi-viewpoint images to be generated. The initial stereo pair is inserted at the appropriate positions in the viewpoint sequence, and the other viewpoint images are generated from it. This distributes the multi-viewpoint images uniformly, so that adjacent images have similar disparity, which also improves the quality of the final mixed image.
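The view-count calculation, and one possible reading of the initial-pair placement, can be sketched as below. The placement rule is an assumption (the source's position formula survives only as a figure): with a per-view disparity d, a pair whose mutual disparity is D would sit D/d viewpoint intervals apart, centered in the sequence.

```python
def view_count(printer_dpi, lens_lpi):
    """N = DPI / LPI: one printed sub-column per view under each lens."""
    n, rem = divmod(printer_dpi, lens_lpi)
    if rem:
        raise ValueError("DPI should be an integer multiple of LPI")
    return n

def initial_pair_positions(n_views, pair_disparity, per_view_disparity):
    """Assumed placement of the input left/right images among n views.

    Hypothetical rule: the input pair spans D/d viewpoint intervals
    (D = pair_disparity, d = per_view_disparity) and is centered in
    the 0-indexed viewpoint sequence.
    """
    span = round(pair_disparity / per_view_disparity)
    left_idx = (n_views - 1 - span) // 2
    return left_idx, left_idx + span
```
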
After the number of images and the positions of all images are determined, the system uses the depth maps to generate the multi-view images. The depth maps of the left-eye and right-eye images were generated in the preceding sections. Each base image (left-eye or right-eye) has its pixels shifted according to its own depth map; the pixel shift module 33 shifts the pixels of a base image to form a new image. Depth values typically range from 0 to 255, with 128 as the middle value corresponding to the convergence plane of the base image. To simulate a left-eye view from the base image, pixels with depth values from 128 to 255 are shifted to the right and pixels with depth values from 0 to 127 are shifted to the left; to simulate a right-eye view, pixels with depth values from 128 to 255 are shifted to the left and pixels with depth values from 0 to 127 are shifted to the right. From 128 to 255, the higher the depth value, the farther the pixel moves; from 0 to 127, the lower the depth value, the farther the pixel moves. The pixel shift formula is:
lx = x + parallax;  rx = x − parallax
where parallax is the depth-derived parallax parameter of the image, lx is the x-coordinate of a pixel in the new left-eye image, and rx is the x-coordinate of a pixel in the new right-eye image: the pixel at (lx, y) in the new left-eye image and the pixel at (rx, y) in the new right-eye image are both taken from the pixel at (x, y) in the base image. After the pixels have been shifted appropriately, the new left-eye and right-eye images are generated.
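A minimal sketch of this depth-driven pixel shift for a grayscale base image. The linear `max_shift` scaling is an assumption; the patent specifies only the shift directions and the monotonic relation between depth and shift distance:

```python
import numpy as np

def shift_view(base, depth, max_shift, direction):
    # Shift each pixel of grayscale `base` horizontally by a parallax
    # proportional to (depth - 128); depth 128 is the convergence plane
    # and does not move. direction=+1 simulates a left-eye view
    # (lx = x + parallax), direction=-1 a right-eye view (rx = x - parallax).
    h, w = depth.shape
    out = np.zeros_like(base)
    filled = np.zeros((h, w), dtype=bool)  # marks pixels that received a value
    parallax = (depth.astype(int) - 128) * max_shift // 128
    for y in range(h):
        for x in range(w):
            nx = x + direction * parallax[y, x]
            if 0 <= nx < w:
                out[y, nx] = base[y, x]
                filled[y, nx] = True
    return out, filled
```

Positions where `filled` stays False are the holes handled by the hole-filling step.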
When the system generates a new image in this way, some pixels are missing; handling these missing pixels is called hole filling, and the hole filling module 34 fills these holes. Holes produced by pixel shifting can be filled by interpolating from neighboring pixels, or by any other suitable hole-filling method. The interpolation formulas for a hole's pixel values are as follows.
length = endx − startx;  weight = (holex − startx) / length
pixelvalue = (sourceImage(endx, y) − sourceImage(startx, y)) × weight + sourceImage(startx, y)
where startx and endx are the positions where the hole starts and ends in the row, length is the length of the hole, holex is the x-position of a pixel inside the hole, weight is that pixel's interpolation weight, and pixelvalue is the interpolated pixel value. After hole filling, the newly generated viewpoint images are ready for the next step, and the multi-view image generation module 35 assembles the multi-view image set from the initial image(s) and the newly generated images.
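The interpolation above can be sketched per scanline; `filled` marks pixels that received a value during shifting, and the names are illustrative rather than taken from the patent:

```python
def fill_holes_row(row, filled):
    # Fill each run of un-filled pixels by linear interpolation between
    # the nearest filled neighbours, matching:
    #   weight     = (holex - startx) / length
    #   pixelvalue = (row[endx] - row[startx]) * weight + row[startx]
    row = list(row)
    n = len(row)
    x = 0
    while x < n:
        if filled[x]:
            x += 1
            continue
        startx = x - 1                    # last filled pixel before the hole
        endx = x
        while endx < n and not filled[endx]:
            endx += 1                     # first filled pixel after the hole
        if startx >= 0 and endx < n:      # interior hole: interpolate
            length = endx - startx
            for holex in range(x, endx):
                weight = (holex - startx) / length
                row[holex] = (row[endx] - row[startx]) * weight + row[startx]
        x = endx
    return row
```

Holes touching the image border have no neighbour on one side and would need one of the "other suitable methods" the text mentions, e.g. nearest-pixel replication.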
The image interlacing module 4 adjusts the previously generated multi-view images and forms the blended image; it comprises an image adjustment module 41, a contrast adjustment module 42, an image composition module 43 and a blended image output module 44. Figure 8 shows the flow for forming the blended image. To improve the quality of the final blended image, the system first resizes each image to the appropriate width; the image adjustment module 41 resizes the multi-view images. Taking the 12 images above as an example, the final printed image is 600 pixels per inch wide; since there are 12 images, each image is resized to 600/12 = 50 pixels per inch. The resized images keep the same height as the original: as shown in Figure 11, the 12 resized images have the same height as the original image but a different width. The system then increases the contrast of the resized images; the contrast adjustment module 42 adjusts the contrast. Together, these two steps increase the color detail of the final composite image.
The final image is a blend of the 12 images, formed by the image composition module 43. In this embodiment an image of 600 pixels per inch is reconstructed. To match a 50 LPI lenticular sheet, the blended image contains 600/12 = 50 strips per inch, each strip 12 pixels wide. Figure 12 is a schematic of the blended strips. Pixels are taken from the 12 images in order from 12 down to 1; the right-eye view, i.e. the 12th image, normally comes first in each strip. In Figure 12, the first strip combines the first row of each image, the second strip combines the second row of each image, and so on.
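The strip interleaving can be sketched as taking one pixel column from each resized view in reverse order. This sketch assumes vertical lenticules (so columns, not rows, are interleaved) and grayscale views:

```python
import numpy as np

def interlace(views):
    # Interleave pixel columns from N equally-sized views into one blended
    # image: strip x contains column x of every view, taken in order N..1
    # (right-eye view first, matching the 12-to-1 ordering in the text).
    n = len(views)
    h, w = views[0].shape
    out = np.empty((h, w * n), dtype=views[0].dtype)
    for i, view in enumerate(reversed(views)):
        out[:, i::n] = view  # view fills columns i, i+n, i+2n, ...
    return out
```

With 12 views of 50 columns per inch each, the result is the 600-pixel-per-inch blended image described above.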
In practice, most lenticular sheets do not have an exactly ideal lens count. For example, the LPI may be 50.1 or 49.9 rather than 50, which would distort the final 3D image. The system therefore scales the blended image to fit the actual lenticular sheet. Ideally, a 50 LPI lenticular sheet 10 inches wide would need an image 50 × 12 × 10 = 6000 pixels wide. If, however, the sheet is actually 50.1 LPI at the same 10-inch width, the final image width is 5988 pixels, calculated by the following formula:
Width_actual = Width_ideal × LPI_ideal / LPI_actual, where LPI_ideal is the ideal lenses per inch of the lenticular sheet (50 in this example), LPI_actual is the sheet's actual lenses per inch (50.1 here), Width_ideal is the ideal image width at 50 LPI (6000), and Width_actual is the actual image width at 50.1 LPI (5988). The blended image output module 44 outputs the blended image so formed.
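The final rescaling can be checked numerically; the formula is inferred from the 6000 → 5988 worked example rather than quoted from the patent:

```python
def actual_width(width_ideal, lpi_ideal, lpi_actual):
    # width_actual = width_ideal * lpi_ideal / lpi_actual
    # e.g. 6000 px at an ideal 50 lpi, printed for a 50.1 lpi sheet -> 5988 px.
    return round(width_ideal * lpi_ideal / lpi_actual)
```

A sheet measuring below the nominal LPI (e.g. 49.9) would instead stretch the image slightly: `actual_width(6000, 50, 49.9)` gives 6012.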
The blended image can then be combined with a lenticular sheet to form the 3D photo. The image may be printed directly onto the lenticular sheet, or the printed image may be laminated onto a lenticular sheet and placed in a lenticular photo frame, or combined with the sheet by any other suitable method.
The 3D photo generation system and method of the present invention significantly simplify the 3D photo generation process and improve 3D photo quality. They take a stereo pair as input, so existing 3D cameras and 3D lenses can serve as the capture device, and use image processing techniques to reconstruct 3D information from the stereo images. This generates multi-view images quickly and efficiently while improving their quality. To further improve the blended image, the system first resizes the multi-view images, which increases the color detail of the output. The system and method can be widely deployed in theme parks, tourist attractions and photo studios, letting more consumers enjoy 3D photos.
Embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to these specific embodiments, which are illustrative rather than restrictive. Guided by this disclosure, those of ordinary skill in the art may devise many other forms without departing from the spirit of the invention and the scope of the claims, and all such forms fall within the protection of the invention.
Claims (6)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361761250P | 2013-02-06 | 2013-02-06 | |
| US61/761,250 | 2013-02-06 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN103974055A CN103974055A (en) | 2014-08-06 |
| CN103974055B true CN103974055B (en) | 2016-06-08 |
Family
ID=50896635
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410029673.XA Active CN103974055B (en) | 2013-02-06 | 2014-01-22 | 3D photo generation system and method |
Country Status (3)
| Country | Link |
|---|---|
| US (2) | US9270977B2 (en) |
| CN (1) | CN103974055B (en) |
| HK (2) | HK1189451A2 (en) |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103974055B (en) * | 2013-02-06 | 2016-06-08 | 城市图像科技有限公司 | 3D photo generation system and method |
| CN104270627A (en) * | 2014-09-28 | 2015-01-07 | 联想(北京)有限公司 | Information processing method and first electronic equipment |
| US9773155B2 (en) | 2014-10-14 | 2017-09-26 | Microsoft Technology Licensing, Llc | Depth from time of flight camera |
| US12322071B2 (en) | 2015-03-21 | 2025-06-03 | Mine One Gmbh | Temporal de-noising |
| US12169944B2 (en) | 2015-03-21 | 2024-12-17 | Mine One Gmbh | Image reconstruction for virtual 3D |
| WO2018164852A1 (en) | 2017-02-22 | 2018-09-13 | Mine One Gmbh | Image reconstruction for virtual 3d |
| US10551913B2 (en) * | 2015-03-21 | 2020-02-04 | Mine One Gmbh | Virtual 3D methods, systems and software |
| US11550387B2 (en) * | 2015-03-21 | 2023-01-10 | Mine One Gmbh | Stereo correspondence search |
| US10853625B2 (en) | 2015-03-21 | 2020-12-01 | Mine One Gmbh | Facial signature methods, systems and software |
| US10373366B2 (en) | 2015-05-14 | 2019-08-06 | Qualcomm Incorporated | Three-dimensional model generation |
| US9911242B2 (en) | 2015-05-14 | 2018-03-06 | Qualcomm Incorporated | Three-dimensional model generation |
| US10304203B2 (en) * | 2015-05-14 | 2019-05-28 | Qualcomm Incorporated | Three-dimensional model generation |
| CN105100778A (en) * | 2015-08-31 | 2015-11-25 | 深圳凯澳斯科技有限公司 | Method and device for converting multi-view stereoscopic video |
| US10341568B2 (en) | 2016-10-10 | 2019-07-02 | Qualcomm Incorporated | User interface to assist three dimensional scanning of objects |
| CN109509146B (en) * | 2017-09-15 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Image splicing method and device and storage medium |
| CN107580207A (en) * | 2017-10-31 | 2018-01-12 | 武汉华星光电技术有限公司 | The generation method and generating means of light field 3D display cell picture |
| CN109961395B (en) * | 2017-12-22 | 2022-10-11 | 展讯通信(上海)有限公司 | Method, device and system for generating and displaying depth image and readable medium |
| US10986325B2 (en) * | 2018-09-12 | 2021-04-20 | Nvidia Corporation | Scene flow estimation using shared features |
| TWI683136B (en) * | 2019-01-03 | 2020-01-21 | 宏碁股份有限公司 | Video see-through head mounted display and control method thereof |
| EP4214924B1 (en) * | 2020-09-21 | 2025-07-02 | LEIA Inc. | Multiview display system and method with adaptive background |
Family Cites Families (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7085409B2 (en) * | 2000-10-18 | 2006-08-01 | Sarnoff Corporation | Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery |
| AU2003292490A1 (en) * | 2003-01-17 | 2004-08-13 | Koninklijke Philips Electronics N.V. | Full depth map acquisition |
| US8384763B2 (en) * | 2005-07-26 | 2013-02-26 | Her Majesty the Queen in right of Canada as represented by the Minster of Industry, Through the Communications Research Centre Canada | Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging |
| JP2009505550A (en) * | 2005-08-17 | 2009-02-05 | エヌエックスピー ビー ヴィ | Video processing method and apparatus for depth extraction |
| US8326025B2 (en) * | 2006-09-04 | 2012-12-04 | Koninklijke Philips Electronics N.V. | Method for determining a depth map from images, device for determining a depth map |
| EP2087466B1 (en) * | 2006-11-21 | 2020-06-17 | Koninklijke Philips N.V. | Generation of depth map for an image |
| US8488868B2 (en) * | 2007-04-03 | 2013-07-16 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry, Through The Communications Research Centre Canada | Generation of a depth map from a monoscopic color image for rendering stereoscopic still and video images |
| US8248410B2 (en) * | 2008-12-09 | 2012-08-21 | Seiko Epson Corporation | Synthesizing detailed depth maps from images |
| KR101526866B1 (en) * | 2009-01-21 | 2015-06-10 | 삼성전자주식회사 | Depth Noise Filtering Method and Apparatus Using Depth Information |
| US8472746B2 (en) * | 2010-02-04 | 2013-06-25 | Sony Corporation | Fast depth map generation for 2D to 3D conversion |
| KR101114911B1 (en) * | 2010-04-14 | 2012-02-14 | 주식회사 엘지화학 | Stereoscopic image display device |
| WO2011129625A2 (en) * | 2010-04-14 | 2011-10-20 | (주)Lg화학 | Stereoscopic image display device |
| US8885890B2 (en) * | 2010-05-07 | 2014-11-11 | Microsoft Corporation | Depth map confidence filtering |
| US8406548B2 (en) * | 2011-02-28 | 2013-03-26 | Sony Corporation | Method and apparatus for performing a blur rendering process on an image |
| KR101792501B1 (en) * | 2011-03-16 | 2017-11-21 | 한국전자통신연구원 | Method and apparatus for feature-based stereo matching |
| EP2807827A4 (en) * | 2012-01-25 | 2015-03-04 | Lumenco Llc | Conversion of a digital stereo image into multiple views with parallax for 3d viewing without glasses |
| CN103974055B (en) * | 2013-02-06 | 2016-06-08 | 城市图像科技有限公司 | 3D photo generation system and method |
-
2014
- 2014-01-22 CN CN201410029673.XA patent/CN103974055B/en active Active
- 2014-01-23 HK HK14100717.6A patent/HK1189451A2/en not_active IP Right Cessation
- 2014-01-23 HK HK14103996.2A patent/HK1192107A2/en not_active IP Right Cessation
- 2014-02-04 US US14/172,888 patent/US9270977B2/en active Active
-
2016
- 2016-01-14 US US14/995,208 patent/US9544576B2/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN103974055A (en) | 2014-08-06 |
| US20140219551A1 (en) | 2014-08-07 |
| US9544576B2 (en) | 2017-01-10 |
| HK1192107A2 (en) | 2014-08-08 |
| US20160134859A1 (en) | 2016-05-12 |
| US9270977B2 (en) | 2016-02-23 |
| HK1200254A1 (en) | 2015-07-31 |
| HK1189451A2 (en) | 2014-06-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN103974055B (en) | 3D photo generation system and method | |
| US8300089B2 (en) | Stereoscopic depth mapping | |
| EP2306729B1 (en) | Video encoding device, video encoding method, video reproduction device, video recording medium, and video data stream | |
| US7983477B2 (en) | Method and apparatus for generating a stereoscopic image | |
| JP5186614B2 (en) | Image processing apparatus and image processing method | |
| KR20110124473A (en) | 3D image generating device and method for multi-view image | |
| KR20120053536A (en) | Image display device and image display method | |
| US20120069004A1 (en) | Image processing device and method, and stereoscopic image display device | |
| WO2006075325A1 (en) | Automatic conversion from monoscopic video to stereoscopic video | |
| US9258546B2 (en) | Three-dimensional imaging system and image reproducing method thereof | |
| Salahieh et al. | Light field retargeting from plenoptic camera to integral display | |
| CN107493465A (en) | A kind of virtual multi-view point video generation method | |
| CN101908233A (en) | Method and system for producing plural viewpoint picture for three-dimensional image reconstruction | |
| JP4267364B2 (en) | Stereoscopic image processing method | |
| JP2012138655A (en) | Image processing device and image processing method | |
| CN105430372B (en) | A kind of static integrated imaging method and system based on plane picture | |
| Ogawa et al. | Swinging 3d lamps: A projection technique to convert a static 2d picture to 3d using wiggle stereoscopy | |
| JP2012134885A (en) | Image processing system and image processing method | |
| HK1200254B (en) | 3d photo creation system and method | |
| JP5492311B2 (en) | Viewpoint image generation apparatus, viewpoint image generation method, and stereoscopic image printing apparatus | |
| JP2012142800A (en) | Image processing device, image processing method, and computer program | |
| JP2012213016A (en) | Stereoscopic image generation device and stereoscopic image generation method | |
| US10475233B2 (en) | System, method and software for converting images captured by a light field camera into three-dimensional images that appear to extend vertically above or in front of a display medium | |
| KR20110011047U (en) | A method of creating an auto-stereoscopic image with minimized pseudoscopic 3D effect | |
| Yamada et al. | Multimedia ambience communication based on actual moving pictures in a steroscopic projection display environment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1200254 Country of ref document: HK |
|
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| REG | Reference to a national code |
Ref country code: HK Ref legal event code: GR Ref document number: 1200254 Country of ref document: HK |