
HK1171887A - Three-dimensional display with motion parallax - Google Patents


Info

Publication number
HK1171887A
HK1171887A
Authority
HK
Hong Kong
Prior art keywords
viewer
eye
images
display
tracking
Prior art date
Application number
HK12112500.4A
Other languages
Chinese (zh)
Inventor
C. Huitema
E. Lang
E. Salnikov
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1171887A


Description

Three-dimensional display with motion parallax
Technical Field
The present invention relates to three-dimensional displays.
Background
The human brain derives three-dimensional (3D) cues in various ways. One is stereo vision, which relies on the difference between the images presented to the left and right eyes. Another is motion parallax, which corresponds to the way the viewed scene changes with viewing angle (e.g., as the viewer's head moves).
Current 3D displays are based on stereo vision. Generally, 3D televisions and other displays output separate video frames to each eye via 3D glasses whose lenses block some frames and pass others. Examples include using different colors for the left and right images with corresponding color filters in the glasses, using different light polarizations for the left and right images with corresponding polarized lenses, and using shutters in the glasses. The brain fuses the frames such that the viewer experiences 3D depth from the stereo cues.
Current technology also allows different frames to be directed to each eye without the use of glasses, achieving the same result. Such displays are designed to present different views at different angles, typically by arranging the screen's pixels behind some kind of optical barrier or lens array.
Three-dimensional display technology performs well while the viewer's head remains mostly stationary. However, the view does not change when the viewer's head moves, and thus the stereo cues contradict motion parallax. This conflict causes some viewers to experience fatigue and discomfort when viewing content on a 3D display.
Disclosure of Invention
This summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, aspects of the present invention relate to a hybrid stereoscopic-image/motion-parallax technique that uses stereoscopic 3D vision to present a different image to each eye of a viewer, in conjunction with motion parallax to adjust the presentation or acquisition of each image to the position of the viewer's eyes. In this way, when a viewer moves while viewing a 3D scene, the viewer receives both stereoscopic cues and parallax cues.
In one aspect, left and right images captured by a stereo camera are received and processed for motion parallax adjustment according to position-sensor data corresponding to the current viewer position. The adjusted images are then output for display to the viewer's left and right eyes, respectively. Alternatively, the current viewer position may be used to acquire the images of the scene, for example by correspondingly moving a robotic stereo camera. The technique also applies to multiple viewers watching the same scene, including on the same screen, with each viewer's view tracked and presented independently.
In one aspect, viewer head and/or eye positions are tracked. Note that each eye's position may be tracked directly or estimated from head-tracking data, which may include the head's position in 3D space plus its gaze direction (and/or rotation, and possibly more, e.g., tilt), and thus provides data corresponding to the position of each eye. As used herein, "position data" therefore covers the position of each eye regardless of how it is obtained, e.g., directly or by estimation from head position data.
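For illustration only, the following is a minimal sketch (Python with numpy) of estimating per-eye positions from head-tracking data as described above. The head-pose representation, coordinate conventions, and the nominal 63 mm interpupillary distance are assumptions for the sketch, not details taken from the patent.

```python
# Sketch: estimating left/right eye positions from a tracked head pose.
# Assumes a head pose given as (position, rotation matrix) in display
# coordinates; all names and the 63 mm eye separation are illustrative.
import numpy as np

IPD_M = 0.063  # nominal interpupillary distance in meters (assumed)

def eye_positions(head_pos, head_rot):
    """head_pos: (3,) head-center position in display space.
    head_rot: (3,3) rotation matrix; column 0 is taken as the head's
    right-pointing axis, so head rotation/tilt moves the eyes correctly."""
    right_axis = head_rot[:, 0]              # lateral axis of the head
    half = 0.5 * IPD_M * right_axis
    return head_pos - half, head_pos + half  # (left_eye, right_eye)

# Example: head 0.6 m from the screen, turned 30 degrees about vertical.
theta = np.radians(30.0)
rot_y = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0,           1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
left_eye, right_eye = eye_positions(np.array([0.0, 0.0, 0.6]), rot_y)
```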
Glasses with sensors or emitters may be used for tracking, including the same 3D filtering glasses that use lenses or shutters to pass or block the different images to each eye. (As used herein, a "shutter" is a type of filter, i.e., a timing filter.) Alternatively, computer vision may be used to track head or eye position, particularly for glasses-free 3D display technologies. When glasses are worn, however, the computer vision system may be trained to track the position of the glasses or of one or more of their lenses.
Tracking the current viewer position corresponding to each eye also allows images to be acquired or adjusted for both horizontal and vertical parallax. Thus, for example, tilt, viewing height, and head rotation/tilt data may also be used to adjust and/or acquire the images.
Other advantages of the present invention will become apparent from the following detailed description when read in conjunction with the accompanying drawings.
Drawings
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar or like elements, and in which:
FIG. 1 is a representation of a viewer viewing a stereoscopic display in which a stereoscopic camera provides a left stereoscopic image and a right stereoscopic image.
FIG. 2 is a representation of a viewer viewing a stereoscopic display, where left and right cameras provide left and right stereoscopic images, and motion parallax processing adjusts the presentation of each image based on the viewer's current left and right eye positions.
FIG. 3 is a flowchart showing exemplary steps for performing motion parallax processing on separate left and right images.
FIG. 4 is a block diagram representing an exemplary non-limiting computing system or operating environment in which one or more aspects of various embodiments described herein can be implemented.
Detailed Description
Aspects of the present invention generally relate to a hybrid stereoscopic-image/motion-parallax system that uses stereoscopic 3D vision techniques to present different images to each eye, in conjunction with motion parallax techniques to adjust the left and right images to the position of the viewer's eyes. In this way, when the viewer moves while viewing the 3D scene, the viewer receives both stereoscopic cues and parallax cues, which tends to improve visual comfort and reduce fatigue. To this end, the position of each eye (or eyeglass lens, as described below) may be tracked directly or via estimation. A perspective projection computed from the viewer's viewpoint is used to render the 3D scene to each eye in real time, providing the parallax cues.
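One common way to realize such a viewpoint-dependent perspective projection is an asymmetric ("off-axis") viewing frustum anchored to the physical screen. The sketch below illustrates this under assumed conventions (screen centered at the origin in the z = 0 plane, viewer at positive z); it is an illustration of the general technique, not code from the patent.

```python
# Sketch: asymmetric ("off-axis") frustum for one tracked eye, a standard
# way to realize the per-eye perspective projection described above.
# Assumes the display lies in the z = 0 plane, centered at the origin,
# with the viewer at z > 0; names and conventions are illustrative.

def off_axis_frustum(eye, screen_w, screen_h, near=0.1, far=100.0):
    """Return OpenGL-style frustum bounds (l, r, b, t, near, far) for a
    tracked eye position `eye` = (x, y, z) in meters, display coordinates."""
    d = eye[2]                      # eye-to-screen distance
    scale = near / d                # project screen edges onto the near plane
    left   = (-screen_w / 2 - eye[0]) * scale
    right  = ( screen_w / 2 - eye[0]) * scale
    bottom = (-screen_h / 2 - eye[1]) * scale
    top    = ( screen_h / 2 - eye[1]) * scale
    return left, right, bottom, top, near, far

# Rendering the scene once per eye with these bounds (e.g., via glFrustum)
# yields both stereo cues (two eyes) and motion parallax (tracked eyes),
# including vertical parallax when the viewer's height changes.
```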
It should be understood that any of the examples herein are non-limiting. Thus, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities, or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities, or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in display technology in general.
Fig. 1 shows a viewer 100 viewing a representation of a 3D scene 102, captured by left and right stereo cameras 106, on a 3D stereo display 104. In fig. 1, the viewer's eyes may be assumed to be in a starting position (zero motion parallax). Note that the objects in the scene 102 are drawn as appearing to come out of the display to indicate that separate left and right images are being shown, which the viewer 100 perceives as 3D.
Fig. 2 is a representation of the same viewer 100 viewing the same 3D scene 102, captured by the left and right stereo cameras 106, on the 3D stereo display 104; however, in fig. 2 the viewer has moved relative to fig. 1. Example movements include vertical and/or horizontal movement, rotation of the head, and pitch and/or tilt of the head. As such, the eye positions sensed or estimated from the data of the position sensor/eye tracking sensor 110 (e.g., estimated from head position data, which may include 3D position, rotation, orientation, tilt, etc.) are different from those in fig. 1. Examples of such position sensors/eye tracking sensors are described below.
As is known from single-image ("mono") parallax scenarios, the image captured by a camera can be adjusted by relatively straightforward geometric calculations to match the approximate head position of the viewer, and thus the horizontal viewing angle. For example, head tracking systems based on cameras and computer vision algorithms have been used to achieve "mono 3D" effects, as explained in Cha Zhang, Zhaozheng Yin, and Dinei Florêncio, "Improving Depth Perception with Motion Parallax and Its Application in Teleconferencing," MMSP'09, Oct. 5-7, 2009, http://research. In such mono-parallax scenarios there is essentially a "virtual" camera that appears to move within the scene as the viewer's head moves horizontally. However, no such known technique handles the left and right images separately, and thus stereoscopic images are not contemplated. Furthermore, head tilt, viewing height, and/or head rotation do not change the viewed image.
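As a toy illustration of the "relatively straightforward geometric calculations" mentioned above: for content at a given depth, a small sideways head translation corresponds to an image shift proportional to inverse depth. The function and numbers below are illustrative assumptions under a pinhole-camera model, not the cited paper's algorithm.

```python
# Toy model of single-image (mono) parallax: approximate the view change
# for a small head translation by shifting image content in proportion
# to inverse depth. Illustrative only, under a pinhole-camera assumption.
def parallax_shift_px(head_dx_m, depth_m, focal_px):
    """Horizontal pixel shift for content at depth_m meters when the
    head moves head_dx_m meters parallel to the screen."""
    return focal_px * head_dx_m / depth_m

# Nearby objects (small depth) shift more than distant ones, which is
# exactly the motion-parallax depth cue:
near_shift = parallax_shift_px(0.05, 0.5, 1000.0)   # -> 100.0 px
far_shift  = parallax_shift_px(0.05, 5.0, 1000.0)   # -> 10.0 px
```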
Rather than a virtual camera, it is also feasible for the camera 106 of fig. 1 to be a robotic stereo camera that moves in a real environment to capture the scene from different angles, such as by moving to the same position/orientation as the virtual camera 206 of fig. 2. Another alternative is to adjust a prerecorded single stereo video, or to interpolate video from multiple stereo cameras that capture/record the 3D scene from various angles. As such, three-dimensional displays with the motion parallax techniques described herein work, in part, by obtaining and/or adjusting the left and right images based on sensed viewer position data.
As described herein, motion parallax processing is performed by the motion parallax processing component 112 on the left and right images separately, to provide parallax-adjusted left and right images 114 and 115, respectively. Note that it is feasible to estimate the positions of both eyes from head (or single-eye) position data; however, the images then cannot be adjusted for head tilt, pitch, and/or rotation/gaze direction unless more head-related information than the general head position alone is sensed and provided to the motion parallax processing component. Thus, the sensed position data may also include head tilt, pitch, and/or roll data.
Thus, as generally represented in fig. 2, the virtual left and right (stereo) cameras 206 effectively move, rotate, and/or tilt with the position of the viewer. The same can be done with the processed images of a robotic camera or cameras. The viewer thus sees the 3D scene via the left and right stereo images 214 and 215, respectively, each adjusted for parallax compensation. Note that the objects shown in fig. 2 are intended to represent the same objects shown from a different viewpoint than in fig. 1, but this is for illustrative purposes only; the relative sizes and/or viewpoints in the drawings are not intended to be mathematically exact.
In general, as represented in figs. 1 and 2, the position sensor/eye sensor 110 evaluates the position of the viewer 100 relative to the display. The viewer's position is used to drive the set of left and right virtual cameras 206 so that the scene is effectively viewed from the viewer's virtual position within the 3D scene. The virtual cameras 206 capture two images corresponding to a left eye view and a right eye view. The two images are presented by the stereoscopic display, providing a 3D view to the viewer 100.
As the viewer 100 moves, the viewer's position is tracked in real time and converted into corresponding changes in both the left image 214 and the right image 215. This results in an immersive 3D experience that combines both stereo cues and motion parallax cues.
Turning to aspects related to position/eye tracking, such tracking may be implemented in various ways. One approach includes multi-purpose glasses that combine a stereo filter and a head tracking device (e.g., implemented as a sensor or emitter in the glasses support). Note that various glasses configured to output signals for head tracking, such as including a transmitter (e.g., infrared) that is detected and triangulated, are known in the art. Magnetic sensing is another known alternative.
Yet another alternative is to use a head tracking system based on cameras and computer vision algorithms. Autostereoscopic displays, which direct light to individual eyes and are thus capable of providing separate left and right image views for a 3D effect, are described in U.S. patent application serial nos. 12/819,238, 12/819,239, and 12/824,257, incorporated herein by reference. In one implementation, Microsoft Corporation's Kinect™ technology has been used for head tracking/eye tracking.
Generally, computer vision algorithms for eye tracking use models based on analysis of multiple images of a human head. A standard system may be used with displays that do not require glasses. However, practical problems arise when the viewer wears glasses, because the glasses cover the eyes and many existing face tracking mechanisms therefore fail. To overcome this problem, in one implementation a face tracking system is trained with a set of images of people wearing glasses (either as an alternative to, or in addition to, training with images of unobstructed faces). Indeed, the system may be trained with a set of images of people wearing the particular glasses used by a particular 3D system. This results in very effective tracking, as the glasses tend to stand out as highly recognizable objects in the training data. In this way, a computer-vision-based eye tracking system can be adjusted to account for the presence of the glasses.
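A hedged sketch of the idea of tracking the glasses themselves: a detector trained on images of people wearing the particular glasses, here expressed with OpenCV's cascade-classifier API. The model file name is hypothetical; producing it would require the training step described above.

```python
# Sketch: detecting/tracking the 3D glasses, per the paragraph above,
# using a detector trained on images of people wearing those glasses.
# "glasses_cascade.xml" is a hypothetical custom-trained model, not a
# file shipped with OpenCV.
import cv2

glasses_detector = cv2.CascadeClassifier("glasses_cascade.xml")  # assumed model

def track_glasses(frame_bgr):
    """Return (x, y, w, h) boxes for glasses detected in a camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return glasses_detector.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
```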
Fig. 3 is a flow diagram of example steps of a motion parallax processing mechanism configured to compute the left image and the right image separately. As shown at step 302, the mechanism receives left eye position data and right eye position data from the position/eye tracking sensor. As described above, head position data may alternatively be provided and used for the parallax calculations, including by converting the head position data into left eye and right eye position data.
Step 304 represents calculating a parallax adjustment based on the geometry of the viewer's left eye position. Step 306 represents calculating a parallax adjustment based on the geometry of the viewer's right eye position. Note that it is feasible to use the same calculation for both eyes, such as where rotation and/or tilt is not obtained as part of the head position data and thus not taken into account, since the stereo camera separation already provides some (fixed) disparity. However, even the small distance of about two inches between the eyes causes parallax and a perceptible difference for the viewer, including when the head is turned, tilted, and so forth.
Steps 308 and 310 represent adjusting each image based on the parallax projection calculation. Step 312 outputs the adjusted image to a display device. Note that this may be in the form of a conventional signal provided to a conventional 3D display device, or may be separate left and right signals provided to a display device configured to receive separate images. Indeed, the invention may include the motion parallax processing component 112 (and possibly one or more sensors 110), for example, in the display device itself, or may include the motion parallax processing component 112 in the camera.
Step 314 repeats the process, for example for each left and right frame (or per set of frames/duration, since a viewer can only move so fast). Note that alternatives are feasible, such as alternating left image parallax adjustment and output with right image parallax adjustment and output; that is, the steps of fig. 3 need not occur in the order shown. Also, for example, rather than refreshing every frame or set of frames, a threshold amount of movement may be required to trigger a new parallax adjustment. Such less frequent parallax adjustment processing may be desirable in a multi-viewer environment, so that computing resources can be shared among the viewers.
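Putting the flowchart together, a minimal sketch of the per-frame loop with the movement-threshold variant might look as follows. The sensor, renderer, and display objects are assumed interfaces for illustration, not a real API, and the 5 mm threshold is an arbitrary choice.

```python
# Sketch of the fig. 3 loop with the movement-threshold variant described
# above: re-run the parallax adjustment only when an eye has moved more
# than a threshold since the last update. All interfaces are assumed.
import numpy as np

MOVE_THRESHOLD_M = 0.005   # re-adjust after ~5 mm of movement (assumed)

def run(sensor, renderer, display):
    last_left = last_right = None
    for frame in renderer.frames():
        left_eye, right_eye = sensor.eye_positions()        # step 302
        moved = (last_left is None or
                 np.linalg.norm(left_eye - last_left) > MOVE_THRESHOLD_M or
                 np.linalg.norm(right_eye - last_right) > MOVE_THRESHOLD_M)
        if moved:                                           # steps 304/306
            renderer.set_viewpoints(left_eye, right_eye)
            last_left, last_right = left_eye, right_eye
        left_img = renderer.render(frame, eye="left")       # step 308
        right_img = renderer.render(frame, eye="right")     # step 310
        display.present(left_img, right_img)                # step 312
        # loop continues: step 314
```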
Indeed, although the present invention has been described with reference to a single viewer, it will be appreciated that multiple viewers of the same display may each receive his or her own parallax adjusted stereoscopic images. Displays that can direct different left and right images to the eyes of multiple viewers are known (e.g., as described in the above-mentioned patent application), and thus multiple viewers can simultaneously view the same 3D scene with separate stereoscopic and parallax-adjusted left and right views, as long as the processing power is sufficient to sense the positions of the multiple viewers and perform parallax adjustment.
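One possible way to spread the adjustment cost across viewers, as suggested above, is to refresh one viewer's parallax adjustment per frame in round-robin fashion; again, the objects involved are assumed interfaces rather than anything specified by the patent.

```python
# Sketch: round-robin parallax updates across several tracked viewers,
# one way to distribute computing resources as described above.
def update_viewers(viewers, renderer, frame_index):
    # Refresh only one viewer's parallax adjustment per frame; the display
    # still presents every viewer's most recent left/right image pair.
    v = viewers[frame_index % len(viewers)]
    left_eye, right_eye = v.sensor.eye_positions()
    v.views = (renderer.render_for(left_eye),
               renderer.render_for(right_eye))
```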
As can be seen, there is described herein a hybrid 3D video system that combines a stereoscopic display with dynamic rendering of the left and right images to provide motion parallax. This may be accomplished by inserting position sensors into the 3D glasses (including glasses with separate filter lenses), and/or by computer vision algorithms for eye tracking. The head tracking software may be adjusted to account for the viewer wearing glasses.
The hybrid 3D system may be applied to video and/or graphics applications that display 3D scenes, allowing a viewer to physically (or otherwise) navigate through portions of a stereoscopic image. For example, the displayed 3D scene may correspond to a video game, a 3D teleconference, or a data representation.
Furthermore, the present invention overcomes a significant drawback of current display technologies that consider only horizontal parallax, by also adjusting for vertical parallax (assuming shutter glasses are used, or that the display is capable of directing light both horizontally and vertically, as opposed to some lenticular or other glasses-free technologies that produce only horizontal parallax). The separate eye tracking/head sensing described herein can correct the parallax for any head position (e.g., tilted sideways by a few degrees).
Exemplary computing device
The techniques described herein may be applied to any device. It should be understood, therefore, that handheld, portable, and other computing devices and computing objects of all kinds are contemplated for use in connection with the various embodiments. Accordingly, the general purpose remote computer described below with reference to fig. 4 is but one example of a computing device, here configured to receive sensor output and perform the image parallax adjustment described above.
Embodiments may be implemented in part via an operating system, for use by a developer of services for a device or object, and/or included within application software that operates to perform one or more functional aspects of the embodiments described herein. Software may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers, such as client workstations, servers or other devices. Those skilled in the art will appreciate that computer systems have a variety of configurations and protocols that can be used to communicate data, and thus no particular configuration or protocol should be considered limiting.
FIG. 4 thus illustrates an example of a suitable computing system environment 400 in which one or more aspects of the embodiments described herein can be implemented, although as made clear above, the computing system environment 400 is only one example of a suitable computing environment and is not intended to suggest any limitation as to scope of use or functionality. Nor should the computing system environment 400 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computing system environment 400.
With reference to FIG. 4, an exemplary remote device for implementing one or more embodiments includes a general purpose computing device in the form of a computer 410. Components of computer 410 may include, but are not limited to, a processing unit 420, a system memory 430, and a system bus 422 that couples various system components including the system memory to the processing unit 420.
Computer 410 typically includes a variety of computer readable media and can be any available media that can be accessed by computer 410. The system memory 430 may include computer storage media in the form of volatile and/or nonvolatile memory such as Read Only Memory (ROM) and/or Random Access Memory (RAM). By way of example, and not limitation, system memory 430 may also include an operating system, application programs, other program modules, and program data.
A viewer may enter commands and information into the computer 410 through input devices 440. A monitor or other type of display device is also connected to the system bus 422 via an interface, such as output interface 450. In addition to a monitor, computers can also include other peripheral output devices, such as speakers and a printer, which may be connected through output interface 450.
The computer 410 may operate in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 470. The remote computer 470 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, or any other remote media consumption or transmission device, and may include any or all of the elements described above relative to the computer 410. The logical connections depicted in FIG. 4 include a network 472, such as a Local Area Network (LAN) or a Wide Area Network (WAN), but may also include other networks/buses. Such networking environments are commonplace in homes, offices, enterprise-wide computer networks, intranets, and the Internet.
As mentioned above, while exemplary embodiments have been described in connection with various computing devices and network architectures, the underlying concepts may be applied to any network system and any computing device or system in which it is desirable to improve the efficiency of resource usage.
Moreover, there are numerous ways of implementing the same or similar functionality, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc., which enable applications and services to use the techniques provided herein. Thus, embodiments herein are contemplated from the standpoint of an API (or other software object), as well as from a software or hardware object that implements one or more embodiments as described herein. Thus, various embodiments described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, or wholly in software.
The word "exemplary" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited to these examples. Additionally, any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to exclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes," "has," "includes," and other similar words are used, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word without precluding any additional or other elements from being utilized in a claim.
As noted, the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. As used herein, the terms "component," "module," "system," and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers.
The systems described above have been described with respect to interaction between several components. It will be understood that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and that any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known to those of skill in the art.
In view of the exemplary systems described herein, methodologies that may be implemented in accordance with the described subject matter can also be understood from the flow charts with reference to the figures. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the embodiments are not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Although non-sequential or branched flow is illustrated via flowchart, it can be appreciated that various other branches, flow paths, and orders of the blocks, may be implemented which achieve the same or similar result. Moreover, some illustrated blocks may be optional in implementing the methodologies described hereinafter.
Conclusion
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.
In addition to the embodiments described herein, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiments for performing the same or equivalent function of the corresponding embodiments without deviating therefrom. Further, multiple processing chips or multiple devices may share the performance of one or more functions described herein, and similarly, storage may be effected across multiple devices. Accordingly, the present invention should not be limited to any single embodiment, but rather should be construed in breadth, spirit and scope in accordance with the appended claims.

Claims (20)

1. In a computing environment, a method executed at least in part on at least one processor, comprising:
(a) receiving sensed location data corresponding to a current viewer location;
(b) using the position data to adjust and/or acquire a left image of a scene to account for parallax corresponding to the current viewer position, and a right image to account for parallax corresponding to the current viewer position;
(c) outputting the left image for display to a left eye of the viewer;
(d) outputting the right image for display to a right eye of the viewer;
(e) returning to step (a) to provide the viewer with a motion parallax adjusted stereoscopic representation of the scene.
2. The method of claim 1, further comprising tracking viewer head position to provide at least a portion of the sensed position data.
3. The method of claim 2, wherein tracking the viewer's head position comprises sensing the head position based on one or more sensors attached to glasses, wherein the glasses comprise a lens configured for stereoscopic viewing.
4. The method of claim 2, wherein tracking the viewer's head position comprises sensing the head position based on one or more emitters attached to glasses, wherein the glasses comprise a lens configured for stereoscopic viewing.
5. The method of claim 2, wherein tracking the viewer head position comprises executing a computer vision algorithm.
6. The method of claim 2, further comprising training the computer vision algorithm using a data set corresponding to a person wearing glasses.
7. The method of claim 1, further comprising tracking viewer head position, or viewer head position, rotation, and gaze direction.
8. The method of claim 7, wherein tracking the viewer's head position comprises executing a computer vision algorithm.
9. The method of claim 1, further comprising tracking viewer eye positions separately for left and right eyes.
10. The method of claim 1, further comprising tracking viewer eyeglass lens positions.
11. The method of claim 10, further comprising training the computer vision algorithm using a data set corresponding to a person wearing glasses.
12. The method of claim 1, wherein using the position data comprises adjusting the left image for horizontal and vertical position, rotation, pitch, and tilt, and adjusting the right image for horizontal and vertical position, rotation, pitch, and tilt.
13. The method of claim 1, further comprising:
(i) receiving sensed location data corresponding to a current other viewer location;
(ii) using the position data to adjust and/or acquire a left image of the scene to account for parallax corresponding to the current other viewer position, and a right image to account for parallax corresponding to the current other viewer position;
(iii) outputting the left image for display to the left eye of the other viewer;
(iv) outputting the right image for display to a right eye of the other viewer;
(v) returning to step (i) to provide the other viewer with a motion parallax adjusted stereoscopic representation of the scene.
14. In a computing environment, a system comprising: a position tracking device configured to output position data corresponding to a viewer position; and a motion parallax processing component configured to receive the position data from the position tracking device and left and right image data from a stereo camera, the motion parallax processing component further configured to adjust the left image data based on the position data, to adjust the right image data based on the position data, and to output the corresponding adjusted left and right image data to a display device.
15. The system of claim 14, wherein the position tracking device tracks the position of the viewer's head.
16. The system of claim 14, wherein the position tracking device tracks the position of at least one of the viewer's eyes.
17. The system of claim 14, wherein the location tracking device tracks a location of glasses worn by the viewer or a location of at least one of the lenses of the glasses.
18. One or more computer-readable media having computer-executable instructions that, when executed, perform steps comprising:
receiving a series of left images, at least some of which are adjusted for motion parallax;
outputting the series of left images for display to a left eye of the viewer;
receiving a series of right images, at least some of which are adjusted for motion parallax; and
outputting the series of right images for display to a right eye of the viewer.
19. One or more computer-readable media as recited in claim 18, wherein outputting the series of left images for display to the left eye of the viewer comprises configuring the series of left images to pass through a filter in front of the left eye of the viewer and to be blocked by a filter in front of the right eye of the viewer, and wherein outputting the series of right images for display to the right eye of the viewer comprises configuring the series of right images to pass through a filter in front of the right eye of the viewer and to be blocked by a filter in front of the left eye of the viewer.
20. One or more computer-readable media as recited in claim 18, wherein outputting the series of left images for display to the left eye of the viewer comprises orienting the left images to a calculated or sensed left eye position, and wherein outputting the series of right images for display to the right eye of the viewer comprises orienting the right images to a calculated or sensed right eye position.
HK12112500.4A 2011-02-08 2012-12-04 Three-dimensional display with motion parallax HK1171887A (en)

Applications Claiming Priority (1)

Application Number: US13/022,787
Priority Date: 2011-02-08

Publications (1)

Publication Number: HK1171887A
Publication Date: 2013-04-05

