
Virtual Reality

Dr. Eman Abdellatif


Lecture 4-5
Agenda

VR Types
1- Sensorama

In the years 1960-1962 Morton Heilig created a multi-sensory simulator. A prerecorded film, in color and stereo, was augmented by binaural sound, scent, wind and vibration experiences. This was the first approach to creating a virtual reality system: it had all the features of such an environment, but it was not interactive.
2- The Ultimate Display

In 1965 Ivan Sutherland proposed the ultimate solution for virtual reality: an artificial world construction concept that included interactive graphics, force feedback, sound, smell and taste.
3- "The Sword of Damocles"

The first virtual reality system realized in hardware, not just in concept. Ivan Sutherland constructed a device considered to be the first Head Mounted Display (HMD), with appropriate head tracking. It supported a stereo view that was updated correctly according to the user's head position and orientation.
4- GROPE

The first prototype of a force-feedback system, realized at the University of North Carolina (UNC) in 1971.
5- VIDEOPLACE

Artificial Reality created in 1975 by Myron Krueger – "a conceptual environment, with no existence". In this system the silhouettes of the users, captured by cameras, were projected on a large screen. The participants were able to interact with one another thanks to image processing techniques that determined their positions in the 2D screen space.
6- VCASS

Thomas Furness at the US Air Force's Armstrong Medical Research Laboratories developed the Visually Coupled Airborne Systems Simulator in 1982 – an advanced flight simulator. The fighter pilot wore an HMD that augmented the out-the-window view with graphics describing targeting or optimal flight path information.
7- VIVED

Virtual Visual Environment Display – a stereoscopic monochrome HMD constructed at NASA Ames in 1984 with off-the-shelf technology.
8- VPL (Visual Programming Lab Research)

The VPL company manufactured the popular DataGlove (1985) and the EyePhone HMD (1988) – the first commercially available VR devices. They developed a range of VR equipment, such as the DataGlove, EyePhone HMD and the Audio Sphere.
8- VPL
A VPL Research DataSuit, a full-body outfit with sensors
for measuring the movement of arms, legs, and trunk.
Developed circa 1989. Displayed at the Nissho Iwai
showroom in Tokyo
DataGlove, sold alongside the headset

VPL EyePhone
8- VPL

Demonstration of
Eyephone HMD by VPL
9- BOOM

Commercialized in 1989 by Fake Space Labs, the BOOM is a small box containing two CRT monitors that can be viewed through the eye holes. The user can grab the box, hold it to the eyes and move through the virtual world, as the mechanical arm measures the position and orientation of the box.
10- UNC Walkthrough project

In the second half of the 1980s an architectural walkthrough application was developed at the University of North Carolina. Several VR devices were constructed to improve the quality of this system, such as HMDs, optical trackers and the Pixel-Planes graphics engine.
11- Virtual Wind Tunnel

An application developed in the early 1990s at NASA Ames that allowed the observation and investigation of flow fields with the help of the BOOM and the DataGlove.
12- CAVE

Presented in 1992, CAVE (CAVE Automatic Virtual Environment) is a virtual reality and scientific visualization system. Instead of using an HMD it projects stereoscopic images on the walls of a room (the user must wear LCD shutter glasses). This approach assures superior quality and resolution of the viewed images, and a wider field of view, in comparison to HMD-based systems.
13- Augmented Reality (AR)

A technology that "presents a virtual world that enriches, rather than replaces, the real world". This is achieved by means of a see-through HMD that superimposes virtual three-dimensional objects on real ones. This technology was previously used to enrich the fighter pilot's view with additional flight information (VCASS). Thanks to its great potential – the enhancement of human vision – augmented reality became a focus of many research projects in the early 1990s.
VR technology
Basic components of a VR immersive application
Rendering of virtual worlds
Coordinate system transformations for virtual reality
Rendering of virtual worlds

The visual display transformation for VR is much more complex than in standard computer
graphics. On one hand, there is a hierarchical scene database containing a number of objects
(like in standard computer graphics) and on the other hand is the user controlling the virtual
camera by moving his/her head, flying through the world, or manipulating it (e.g., scaling). To
provide a proper view of the scene, all these components are to be taken into consideration. The
determination of viewing parameters involves the calculation of a series of transformations
between coordinate systems (CS) that depend on hardware setup, user’s head position and state
of input . This section describes how to calculate display transformations for rendering of
monoscopic images.
Rendering of virtual worlds
To render the images we must know where the camera is in the virtual world. Therefore the following transformations must be calculated:
• Eye-In-Sensor – defines the position of the eye (virtual camera) in the tracker’s sensor CS. These
transformations (for left and right eye) are fixed for a given HMD geometry (different HMDs can have
tracker’s sensors mounted differently).
• Sensor-In-Emitter – defines position and orientation of the sensor in the tracker’s emitter CS. This
transformation changes dynamically as the user moves or rotates his/her head and is measured by the
tracking device.
• Emitter-In-Room – defines position and orientation of the tracking system in the physical room it is placed
in. This transformation is fixed for the given physical tracking system setup in room.
• Room-In-World – defines position and orientation of the room (or user controlled vehicle) in the world CS.
This transformation changes dynamically according to the user’s actions like flying, tilting or world scaling.
To resolve the final viewing transformation (from the object CS into the screen CS) we must additionally
take into account the viewing perspective projection and Object-In-World transformations like in standard
computer graphics.
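The chain of transformations listed above can be composed into a single view matrix by multiplying 4x4 homogeneous matrices. A minimal sketch in Python with NumPy; the concrete translation and rotation values are hypothetical placeholders, not taken from any real tracker or HMD:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_y(angle):
    """4x4 homogeneous rotation about the Y axis (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    m = np.eye(4)
    m[0, 0], m[0, 2] = c, s
    m[2, 0], m[2, 2] = -s, c
    return m

# Hypothetical values for each transformation named in the text:
eye_in_sensor = translation(-0.032, 0.0, 0.0)                     # fixed per HMD geometry (left eye)
sensor_in_emitter = rotation_y(0.1) @ translation(0.0, 1.6, 0.5)  # measured by the tracker each frame
emitter_in_room = translation(0.0, 2.0, 0.0)                      # fixed tracker mounting in the room
room_in_world = translation(10.0, 0.0, -5.0)                      # changes as the user flies or scales

# Position and orientation of the eye (virtual camera) in the world CS:
eye_in_world = room_in_world @ emitter_in_room @ sensor_in_emitter @ eye_in_sensor

# The view matrix maps world coordinates into the eye CS:
view = np.linalg.inv(eye_in_world)
```

Repeating the composition with the right-eye Eye-In-Sensor transform yields the second camera of a stereo pair.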
Rendering of virtual worlds
Stereoscopy
Our two eyes allow us to see three-dimensionally. Stereo vision relies on additional depth cues like eye convergence and stereopsis based on retinal disparity, and therefore may greatly increase the feeling of immersion.

For stereo perception in computer graphics we must generate proper pairs of images (stereo pairs). There are two kinds of VR systems that require different visual display transformations in order to produce proper stereoscopic images: Fish Tank VR and immersive systems, described below.
Stereoscopic depth-cues: (a) eye convergence, (b) retinal disparity
Rendering of virtual worlds
• Fish Tank VR – these systems use a standard desktop monitor to present the images. The user is equipped with a head tracker and appropriate 3D glasses. Stereoscopic images are mainly created with the help of the off-axis projection. This method uses two asymmetrical projections that are not centered on the main projection axis.
Alternatively, a crossed-axis projection can be applied. In this approach the angle of convergence at the viewed point is used as the rotational angle for the scene. The rotation is easy to implement, but it can produce divergent and vertical parallaxes (parallax is the distance between homologous points on the screen). A divergent parallax can occur when there are points far away from the center of rotation: their parallax grows the further away they are, so the parallax is unbounded. Vertical parallax occurs when an object is rotated under perspective projection. These artifacts can greatly affect image quality.

• Immersive systems – these systems use an HMD to present images to the user. They use the on-axis projection. To generate stereo images this method uses parallel viewing rays for each of the eyes and the perspective projection.
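The off-axis projection can be sketched by computing an asymmetric viewing frustum for each eye, in the style of OpenGL's glFrustum(left, right, bottom, top, near, far) parameters. The screen size, viewing distance and eye separation below are assumed values for illustration only:

```python
def off_axis_frustum(eye_x, half_w, half_h, screen_dist, near):
    """Asymmetric frustum bounds (left, right, bottom, top) at the near
    plane, for an eye offset horizontally by eye_x from the screen
    center (glFrustum-style parameters)."""
    scale = near / screen_dist
    left = (-half_w - eye_x) * scale
    right = (half_w - eye_x) * scale
    bottom = -half_h * scale
    top = half_h * scale
    return left, right, bottom, top

# Assumed Fish Tank setup: 0.6 m x 0.4 m screen, 0.6 m away, 65 mm eye separation.
ipd = 0.065
l_left, l_right, _, _ = off_axis_frustum(-ipd / 2, 0.3, 0.2, 0.6, 0.1)
r_left, r_right, _, _ = off_axis_frustum(+ipd / 2, 0.3, 0.2, 0.6, 0.1)
# The two frusta are skewed mirror images of each other: l_left == -r_right.
```

Because the two frusta share the same screen rectangle, the projections stay asymmetric about each eye's axis rather than rotating the scene, which is why this method avoids the vertical parallax of the crossed-axis approach.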
Rendering of virtual worlds

Two centers of projection transformations: (a) off-axis projection, (b) on-axis projection
Rendering of virtual worlds

Both center-of-projection methods (off-axis and on-axis) generate a proper perspective, but have the disadvantage that they create regions where only monoscopic information is present (regions seen by only one of the eyes).

For the design of a stereoscopic system special considerations must be taken into account, because even very little distortion (caused by optics geometry or incorrect transformations) may hinder the fusing of proper three-dimensional images.
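The divergent and vertical parallax artifacts of the crossed-axis projection can be demonstrated numerically. A minimal sketch, assuming the center of rotation is at the origin and using a hypothetical convergence half-angle and test point:

```python
import numpy as np

def rotate_y(p, angle):
    """Rotate point p about the vertical (Y) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    x, y, z = p
    return np.array([c * x + s * z, y, -s * x + c * z])

def project(p, d=1.0):
    """Perspective projection onto a plane at distance d in front of the eye."""
    x, y, z = p
    return np.array([d * x / z, d * y / z])

theta = np.radians(3.0)            # assumed convergence half-angle
point = np.array([0.8, 0.6, 4.0])  # a point off the center of rotation

# Crossed-axis stereo: rotate the whole scene by +/- theta, then project on-axis.
left = project(rotate_y(point, +theta))
right = project(rotate_y(point, -theta))

horizontal_parallax = left[0] - right[0]
vertical_parallax = left[1] - right[1]  # nonzero for off-center points: the artifact
```

The rotation changes the depth of the point differently for each eye, so its projected height y/z differs between the two images; an off-axis projection of the unrotated scene would keep the two heights identical.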

Crossed-axes projection: (a) eye convergence, (b) scene rotation


Applications of VR

Motivation to use VR:

Modeling, designing and planning

Telepresence and teleoperating

Data and architectural visualization

Cooperative working

Training and education

Entertainment
Thank you
