Unit 5

The document discusses methods for addressing the hidden surface problem in computer graphics, focusing on techniques such as the Depth Buffer (Z-Buffer) Method, Scan-Line Method, Area-Subdivision Method, Back-Face Detection, A-Buffer Method, and Depth Sorting Method. It also covers basic illumination models and shading techniques, including Constant Intensity Shading and Gouraud Shading, which are used to calculate light intensity and render surfaces realistically. Each method has its advantages and disadvantages, impacting memory usage and processing speed.

Uploaded by

Shruti Tripathi

UNIT- 5

Hidden lines and surfaces

When we view a picture containing non-transparent objects and surfaces, we cannot see those objects that lie behind other objects closer to the eye. We must remove these hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.
There are two approaches for solving the hidden-surface problem − the object-space method and the image-space method. The object-space method is implemented in the physical (world) coordinate system, and the image-space method is implemented in the screen coordinate system.
When we want to display a 3D object on a 2D screen, we need to identify those parts of the scene that are visible from a chosen viewing position.
Depth Buffer (Z-Buffer) Method
This method was developed by Catmull. It is an image-space approach. The basic idea is to test the z-depth of each surface to determine the closest (visible) surface.
In this method each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer. With depths normalized so that 1 is the front clipping plane, the closest surface is the one with the largest z. The method is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.
The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1).
The frame buffer is used to store the intensity (color) value at each position (x, y).
The z-coordinates are usually normalized to the range [0, 1]. A z value of 0 indicates the back clipping plane, and a z value of 1 indicates the front clipping plane.

Algorithm
Step-1 − Set the buffer values −
Depthbuffer (x, y) = 0
Framebuffer (x, y) = background color
Step-2 − Process each polygon (One at a time)
For each projected (x, y) pixel position of a polygon, calculate
depth z.
If Z > depthbuffer (x, y)
Compute surface color,
set depthbuffer (x, y) = z,
framebuffer (x, y) = surfacecolor (x, y)
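The two steps above can be sketched in Python. This is a minimal illustration, not the text's own code; the buffer sizes, background color, and the plot helper are assumptions made for the example:

```python
# Minimal z-buffer sketch.  Depth is normalized to [0, 1] with 1 at the
# front clipping plane, so a larger z means a closer surface.
WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

# Step 1: initialize depth buffer to 0 and frame buffer to background.
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Step 2 for one projected pixel: keep the fragment only if it is
    closer than whatever is already stored at (x, y)."""
    if z > depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color
```

Because the test keeps the largest stored z at each pixel, surfaces can be submitted in any order and the closest one always ends up in the frame buffer.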
Advantages
 It is easy to implement.
 It reduces the speed problem if implemented in
hardware.
 It processes one object at a time.
Disadvantages
 It requires large memory.
 It is a time-consuming process.

Scan-Line Method
It is an image-space method to identify visible surfaces. This method keeps depth information for only a single scan line. In order to obtain one scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each
line in the scene, the inverse slope of each line, and pointers
into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers to the edge table.

To facilitate the search for surfaces crossing a given scan line, an active list of edges is formed. The active list stores only those edges that cross the scan line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position along a scan line is inside or outside the surface.
Pixel positions across each scan-line are processed from left
to right. At the left intersection with a surface, the surface flag
is turned on and at the right, the flag is turned off. You only
need to perform depth calculations when multiple surfaces
have their flags turned on at a certain scan-line position.
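The flag-toggling pass for one scan line can be sketched as follows. This is an illustrative sketch, not the text's algorithm verbatim: the `edge_crossings` list and the `depth_at` callback are hypothetical stand-ins for the edge and polygon tables.

```python
def visible_spans(edge_crossings, depth_at):
    """edge_crossings: (x, surface) pairs where polygon edges cross the
    current scan line.  depth_at(surface, x) returns the surface's
    distance from the viewer and is consulted only where more than one
    surface flag is turned on."""
    active = set()                       # surfaces whose flag is on
    spans = []
    crossings = sorted(edge_crossings)   # process left to right
    for i, (x, surf) in enumerate(crossings):
        # a left edge turns the surface flag on, a right edge turns it off
        if surf in active:
            active.remove(surf)
        else:
            active.add(surf)
        if i + 1 < len(crossings) and active:
            x_next = crossings[i + 1][0]
            if len(active) == 1:         # no depth test needed
                front = next(iter(active))
            else:                        # depth comparison required
                mid = (x + x_next) / 2
                front = min(active, key=lambda s: depth_at(s, mid))
            spans.append((x, x_next, front))
    return spans
```

For example, if surface A spans x from 0 to 10 at depth 5 and surface B spans 5 to 15 at depth 2, the overlap region 5 to 10 is assigned to B, the nearer surface.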

Area-Subdivision Method
The area-subdivision method takes advantage of area coherence by locating those view areas that represent part of a single surface. We divide the total viewing area into smaller and smaller rectangles until each small area is the projection of part of a single visible surface or no surface at all.
Continue this process until the subdivisions are easily
analyzed as belonging to a single surface or until they are
reduced to the size of a single pixel. An easy way to do this is
to successively divide the area into four equal parts at each
step. There are four possible relationships that a surface can
have with a specified area boundary.
 Surrounding surface − One that completely encloses
the area.
 Overlapping surface − One that is partly inside and
partly outside the area.
 Inside surface − One that is completely inside the area.
 Outside surface − One that is completely outside the
area.

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivisions of a specified area are needed if one of the following conditions is true −
 All surfaces are outside surfaces with respect to the area.
 Only one inside, overlapping or surrounding surface is in
the area.
 A surrounding surface obscures all other surfaces within
the area boundaries.
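The recursive subdivision can be sketched as below. This is a sketch under stated assumptions: `classify` and `shade` are hypothetical callbacks supplied by the caller, and the third stopping test (a surrounding surface obscuring the rest) is omitted because it needs depth information.

```python
def quarters(area):
    """Split a rectangle (x, y, w, h) into four roughly equal parts."""
    x, y, w, h = area
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, w - hw, hh),
            (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]

def subdivide(area, surfaces, classify, shade, min_size=1):
    """classify(surface, area) returns one of 'outside', 'inside',
    'overlapping', 'surrounding'; shade(area, surface) fills the area."""
    relevant = [s for s in surfaces if classify(s, area) != 'outside']
    if not relevant:                 # condition 1: all surfaces outside
        return
    if len(relevant) == 1:           # condition 2: only one surface
        shade(area, relevant[0])     # is in the area
        return
    x, y, w, h = area
    if w <= min_size and h <= min_size:
        shade(area, relevant[0])     # pixel-sized: a full implementation
        return                       # would pick the closest surface
    for quadrant in quarters(area):  # otherwise split into four parts
        subdivide(quadrant, relevant, classify, shade, min_size)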

Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

Ax + By + Cz + D < 0

When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).
We can simplify this test by considering the normal
vector N to a polygon surface, which has Cartesian
components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if
V.N > 0
Furthermore, if object descriptions are converted to projection coordinates and the viewing direction is parallel to the viewing z-axis, then
V = (0, 0, Vz) and V.N = Vz C
so we only need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with viewing direction along the negative zv axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since the viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value
C <= 0

Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates specified in a clockwise direction (unlike the counterclockwise direction used in a right-handed system). Also, back faces have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive zv axis. By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
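Both forms of the test above can be sketched directly; this is a small illustration with assumed function names, using the right-handed convention from the text:

```python
def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(a * b for a, b in zip(u, v))

def is_back_face(normal, view_dir):
    """General object-space test: with N = (A, B, C) the polygon normal
    and V the viewing direction, the polygon is a back face if V.N > 0."""
    return dot(view_dir, normal) > 0

def is_back_face_rh(normal):
    """Right-handed system, viewing along the negative z axis: any
    polygon whose normal has z component C <= 0 is a back face."""
    return normal[2] <= 0
```

For a polygon facing the viewer (normal toward +z, viewer looking toward −z), both tests report a front face; flipping the normal flips the result.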
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw).
The A-buffer expands on the depth buffer method to allow
transparencies. The key data structure in the A-buffer is the
accumulation buffer.

Each position in the A-buffer has two fields −
 Depth field − It stores a positive or negative real number.
 Intensity field − It stores surface-intensity information or a pointer value.
If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes −
 RGB intensity components
 Opacity Parameter
 Depth
 Percent of area coverage
 Surface identifier
The algorithm proceeds just like the depth buffer algorithm.
The depth and opacity values are used to determine the final
color of a pixel.
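One way the two-field cell and its linked surface list might be represented is sketched below. The field names are assumptions, smaller depth means nearer the viewer in this sketch, and the text does not prescribe a blend rule, so a simple front-to-back "over" composite is used as one plausible choice:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Fragment:
    """One node of the linked surface list."""
    rgb: Tuple[float, float, float]
    opacity: float        # 0 = transparent .. 1 = opaque
    depth: float          # smaller = nearer the viewer in this sketch
    coverage: float       # fraction of the pixel area covered
    surface_id: int

@dataclass
class ABufferCell:
    """depth >= 0: a single surface, its color in `rgb`;
    depth < 0: multiple contributions, stored in `fragments`."""
    depth: float = 0.0
    rgb: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    fragments: List[Fragment] = field(default_factory=list)

def resolve(cell):
    """Composite the fragment list front to back with an 'over' blend."""
    if cell.depth >= 0:
        return cell.rgb
    color = [0.0, 0.0, 0.0]
    remaining = 1.0                       # light not yet absorbed
    for f in sorted(cell.fragments, key=lambda f: f.depth):
        for i in range(3):
            color[i] += remaining * f.opacity * f.rgb[i]
        remaining *= 1.0 - f.opacity
    return tuple(color)
```

For example, a half-transparent red fragment in front of an opaque blue one resolves to an equal mix of red and blue.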
Depth Sorting Method
The depth-sorting method uses both image-space and object-space operations. It performs two basic functions −
 First, the surfaces are sorted in order of decreasing depth.
 Second, the surfaces are scan-converted in order, starting
with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in
image space. This method for solving the hidden-surface
problem is often referred to as the painter's algorithm. The
following figure shows the effect of depth sorting −

The algorithm begins by sorting by depth. For example, the initial "depth" estimate of a polygon may be taken to be the closest z value of any vertex of the polygon.
Let us take the polygon P at the end of the list. Consider all
polygons Q whose z-extents overlap P’s. Before drawing P,
we make the following tests. If any of the following tests is
positive, then we can assume P can be drawn before Q.
 Do the x-extents not overlap?
 Do the y-extents not overlap?
 Is P entirely on the opposite side of Q’s plane from the
viewpoint?
 Is Q entirely on the same side of P’s plane as the
viewpoint?
 Do the projections of the polygons not overlap?
If all the tests fail, then we split either P or Q using the plane of the other. The new cut polygons are inserted into the depth order and the process continues. Theoretically, this partitioning could generate O(n²) individual polygons, but in practice the number of polygons is much smaller.
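The initial sort and the extent tests above can be sketched as follows. This is an illustrative sketch with assumed helper names; the plane-side and projection-overlap tests, and the splitting step, are not shown:

```python
def painters_order(polygons, depth):
    """Sort surfaces in order of decreasing depth, so the farthest is
    drawn first.  depth(p) is the initial estimate from the text,
    e.g. the closest z of any vertex of p, measured as distance
    from the viewer."""
    return sorted(polygons, key=depth, reverse=True)

def extents_overlap(a, b):
    """1-D extent test used for the first two checks: if the x (or y)
    ranges (lo, hi) do not overlap, P can be drawn before Q without
    any further tests."""
    return a[0] <= b[1] and b[0] <= a[1]
```

Scan-converting the sorted list back to front then overwrites farther surfaces with nearer ones, which is why the method is called the painter's algorithm.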

Basic Illumination Models

An illumination model, also known as a shading model or lighting model, is used to calculate the intensity of light that is reflected at a given point on a surface. The lighting effect depends on three factors:
1. Light Source : The light source is the light-emitting source. Its position, electromagnetic spectrum, and shape determine the lighting effect. There are three types of light sources:
1. Point Sources – Sources that emit rays in all directions (a bulb in a room).
2. Parallel Sources – Can be considered as a point source which is far from the surface (the sun).
3. Distributed Sources – Rays originate from a finite area (a tubelight).
2. Surface : When light falls on a surface, part of it is reflected and part of it is absorbed. The surface structure decides the amount of reflection and absorption of light. The position of the surface and the positions of all the nearby surfaces also determine the lighting effect.
3. Observer : The observer's position and sensor spectrum sensitivities also affect the lighting effect.
1. Ambient Illumination
Assume you are standing on a road, facing a building with a glass exterior; sun rays falling on that building are reflected back from it and then fall on the object under observation. This is ambient illumination. In simple words, ambient illumination is illumination whose source of light is indirect. The reflected intensity Iamb of any point on the surface is:
Iamb = Ka Ia
where Ka (0 ≤ Ka ≤ 1) is the surface's ambient-reflection coefficient and Ia is the intensity of the ambient light.
2. Diffuse Reflection
Diffuse reflection occurs on surfaces which are rough or grainy. In this reflection the brightness of a point depends upon the angle made by the light source and the surface. The reflected intensity Idiff of a point on the surface is:
Idiff = Kd Il cos θ = Kd Il (N . L)
where Kd is the diffuse-reflection coefficient, Il is the intensity of the light source, and θ is the angle between the unit surface normal N and the unit direction vector L to the light source.

3. Specular Reflection
When light falls on a shiny or glossy surface, most of it is reflected back; such reflection is known as specular reflection. The Phong model is an empirical model for specular reflection which provides the formula for calculating the reflected intensity Ispec:
Ispec = Ks Il cos^n φ = Ks Il (R . V)^n
where Ks is the specular-reflection coefficient, φ is the angle between the ideal specular-reflection direction R and the viewing direction V, and n is the specular-reflection (shininess) exponent.
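Combining the three terms gives the usual single-light Phong intensity. The sketch below is illustrative: the coefficient values are assumed defaults, not values from the text, and N, L, V are unit vectors (surface normal, direction to the light, direction to the viewer):

```python
def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(a * b for a, b in zip(u, v))

def phong_intensity(N, L, V, ka=0.1, kd=0.7, ks=0.2, n=10, Ia=1.0, Il=1.0):
    """I = ka*Ia + Il*(kd*(N.L) + ks*(R.V)^n), with all three terms
    from the text: ambient, diffuse, and specular."""
    ndotl = dot(N, L)
    if ndotl <= 0.0:          # light is behind the surface:
        return ka * Ia        # only the ambient term remains
    # R is L mirrored about the normal: R = 2(N.L)N - L
    R = tuple(2.0 * ndotl * N[i] - L[i] for i in range(3))
    rdotv = max(dot(R, V), 0.0)
    return ka * Ia + Il * (kd * ndotl + ks * rdotv ** n)
```

With the light and viewer both along the normal, every term contributes fully; with the light behind the surface, only the ambient term survives.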
Introduction of Shading
Shading is referred to as the implementation of the
illumination model at the pixel points or polygon surfaces of
the graphics objects.
Shading model is used to compute the intensities and colors to
display the surface. The shading model has two primary
ingredients: properties of the surface and properties of the
illumination falling on it. The principal surface property is its
reflectance, which determines how much of the incident light
is reflected. If a surface has different reflectance for the light
of different wavelengths, it will appear to be coloured.
An object's illumination is also significant in computing intensity. The scene may have illumination that is uniform from all directions, called diffuse illumination.
Shading models determine the shade of a point on the surface of an object in terms of a number of attributes. The shading model can be decomposed into three parts: a contribution from diffuse illumination, the contribution from one or more specific light sources, and a transparency effect. Each of these effects contributes a shading term E, and the terms are summed to find the total energy coming from a point on an object. This is the energy a display should generate to present a realistic image of the object. The energy comes not from a point on the surface but from a small area around the point.

The diffuse term is

Epd = Rp Id

where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse illumination falling on the entire scene, and Rp is the reflectance coefficient at P, which ranges from 0 to 1. The shading contribution from specific light sources will cause the shade of a surface to vary as its orientation with respect to the light sources changes, and will also include specular-reflection effects. Consider a point P on a surface, with light arriving at an angle of incidence i, the angle between the surface normal Np and a ray to the light source. If the energy Ips arriving from the light source is reflected uniformly in all directions, called diffuse reflection, we have
Eps = (Rp cos i) Ips
This equation shows the reduction in the intensity of a surface
as it's tipped obliquely to the light source. If the angle of
incidence i exceeds 90°, the surface is hidden from the light
source and we must set Eps to zero.

Constant Intensity Shading


A fast and straightforward method for rendering an object with polygon surfaces is constant-intensity shading, also called flat shading. In this method, a single intensity is calculated for each polygon. All points over the surface of the polygon are then displayed with the same intensity value. Constant shading can be useful for quickly displaying the general appearance of a curved surface, as shown in fig:
In general, flat shading of polygon facets provides an accurate rendering for an object if all of the following assumptions are valid:
 The object is a polyhedron and is not an approximation of an object with a curved surface.
 All light sources illuminating the object are sufficiently far from the surface so that N . L and the attenuation function are constant over the surface (where N is the unit normal to a surface and L is the unit direction vector to the point light source from a position on the surface).
 The viewing position is sufficiently far from the surface so that V . R is constant over the surface (where V is the unit vector pointing to the viewer from the surface position and R represents a unit vector in the direction of ideal specular reflection).

Gouraud shading
This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud shading, renders a polygon surface by linearly interpolating intensity values across the surface. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.
Each polygon surface is rendered with Gouraud shading by performing the following calculations:
1. Determine the average unit normal vector at each polygon vertex.
2. Apply an illumination model to each vertex to determine the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in fig:

Thus, for any vertex position V, we acquire the unit vertex normal with the calculation

Nv = (Σk Nk) / |Σk Nk|

where the sum runs over the surfaces sharing vertex V. Once we have the vertex normals, we can determine the intensity at the vertices from a lighting model.
The following figures demonstrate the next step: interpolating intensities along the polygon edges. For each scan line, the intensities at the intersection of the scan line with a polygon edge are linearly interpolated from the intensities at the edge endpoints. For example, in the figure, the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of the scan line.

Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and 3. Once these bounding intensities are established for a scan line, an interior point (such as point P in the previous fig) is interpolated from the bounding intensities at points 4 and 5 as

Ip = ((x5 − xp)/(x5 − x4)) I4 + ((xp − x4)/(x5 − x4)) I5

Incremental calculations are used to obtain successive edge intensity values between scan lines and to obtain successive intensities along a scan line, as shown in fig:
If the intensity at edge position (x, y) is interpolated as

I = ((y − y2)/(y1 − y2)) I1 + ((y1 − y)/(y1 − y2)) I2

then we can obtain the intensity along this edge for the next scan line, y − 1, as

I' = I + (I2 − I1)/(y1 − y2)
Similar calculations are used to obtain intensities at successive horizontal pixel positions along each scan line.
When surfaces are to be rendered in color, the intensities of each color component are calculated at the vertices. Gouraud shading can be combined with a hidden-surface algorithm to fill in the visible polygons along each scan line. An example of an object shaded with the Gouraud method appears in the following figure:
Gouraud shading removes the intensity discontinuities associated with the constant-shading model, but it has some other deficiencies. Highlights on the surface are sometimes displayed with anomalous shapes, and the linear intensity interpolation can cause bright or dark intensity streaks, called Mach bands, to appear on the surface. These effects can be reduced by dividing the surface into a greater number of polygon faces or by using other methods, such as Phong shading, that require more calculations.
