
Intro to CG - Understanding Rendering Process in XR

Dr. Samit Bhattacharya
Dept. of Comp. Sc. & Engg., IIT Guwahati
Our Objective

To learn HOW it works (what goes on in the background) – for realistic rendering
Generic CG System Architecture
Types of Graphics Devices

Broadly two types (based on the method used for excitation of pixels):
- Vector scan
- Raster scan
Raster Scan Example
Frame Buffer

- The video memory for a raster scan system is more generally known as the frame buffer
- Contains one location corresponding to each pixel
- Thus its size equals the screen resolution
Screen Refreshing

- Light emitted from pixel elements after excitation starts decaying over time, leading to fading of the scene after some time
- However, pixels in a scene may get excited at different points of time, and thus may not fade in sync
- This may lead to image distortion
Refreshing

- To avoid this, the pixels are re-excited periodically; this is known as refreshing
- Refresh rate: the number of times a scene is refreshed per second (in Hz or Hertz, the frequency unit)
- Typical rate: at least 60 Hz
Object Representation
Broad Types of Representation

- Boundary (surface) representation
- Space-partitioning: representing a region with a set of non-overlapping, contiguous solids (usually cubes)
Mesh

A connected set of polygons (usually triangles)
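For instance, a triangle mesh is often stored as an indexed face set: a shared list of vertex positions plus triangles that index into it. A minimal sketch in C++ (the struct and field names are illustrative, not from the handout):

```cpp
#include <array>
#include <vector>

// A vertex position in 3D.
struct Vec3 { float x, y, z; };

// Indexed triangle mesh: each triangle stores three indices into the
// shared vertex array, so shared vertices are not duplicated.
struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<std::array<int, 3>> triangles;
};

int main() {
    // A unit square built from two triangles sharing an edge.
    Mesh quad;
    quad.vertices  = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    quad.triangles = { {0,1,2}, {0,2,3} };
    return 0;
}
```

Indexing lets neighboring triangles share vertices, so each vertex is stored (and later transformed) only once.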
Curve Representation

Spline representation (required to represent complex surfaces)
Spline Idea

- Use SEVERAL polynomials
- The complete curve consists of several pieces
- Each piece is called a “blending/basis function”
- All pieces are of low order (third order is most common)
- The pieces join smoothly
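As one concrete instance of this idea, a cubic Bézier segment blends four control points with four third-order basis (Bernstein) polynomials. A minimal sketch in C++ (the function and variable names are illustrative):

```cpp
#include <cstdio>

struct Point2 { double x, y; };

// Evaluate one cubic Bezier segment at parameter t in [0, 1].
// The four Bernstein polynomials are the cubic blending functions.
Point2 cubicBezier(Point2 p0, Point2 p1, Point2 p2, Point2 p3, double t) {
    double u  = 1.0 - t;
    double b0 = u * u * u;        // blending function for p0
    double b1 = 3.0 * u * u * t;  // blending function for p1
    double b2 = 3.0 * u * t * t;  // blending function for p2
    double b3 = t * t * t;        // blending function for p3
    return { b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
             b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y };
}

int main() {
    Point2 p0{0,0}, p1{1,2}, p2{3,2}, p3{4,0};
    for (double t = 0.0; t <= 1.0; t += 0.25) {
        Point2 q = cubicBezier(p0, p1, p2, p3, t);
        std::printf("t=%.2f -> (%.3f, %.3f)\n", t, q.x, q.y);
    }
    return 0;
}
```

A full spline curve strings several such low-order segments together, matching positions (and usually derivatives) where they join.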


Voxels

- Partition space into a uniform grid
- Grid cells are called voxels (like pixels)
Binary Space Partitions (BSPs)

- Recursive partition of space by planes
- Mark leaf cells as inside or outside the object
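A BSP tree maps naturally onto a recursive data structure: internal nodes store splitting planes, leaves store the inside/outside mark. A minimal sketch in C++ (the types and the point-in-object query are illustrative):

```cpp
#include <cstdio>
#include <memory>

// Splitting plane in implicit form: ax + by + cz + d = 0.
struct Plane { float a, b, c, d; };

// Internal nodes split space by a plane; leaves are marked
// inside or outside the object.
struct BSPNode {
    bool isLeaf = false;
    bool inside = false;             // meaningful only for leaves
    Plane splitPlane{};              // meaningful only for internal nodes
    std::unique_ptr<BSPNode> front;  // half-space where ax + by + cz + d >= 0
    std::unique_ptr<BSPNode> back;   // half-space where ax + by + cz + d < 0
};

// Point query: descend to the leaf containing (x, y, z).
bool contains(const BSPNode& node, float x, float y, float z) {
    if (node.isLeaf) return node.inside;
    float side = node.splitPlane.a * x + node.splitPlane.b * y +
                 node.splitPlane.c * z + node.splitPlane.d;
    return contains(side >= 0 ? *node.front : *node.back, x, y, z);
}

int main() {
    // One plane (x = 0): the half-space x < 0 is "inside" the object.
    BSPNode root;
    root.splitPlane = {1, 0, 0, 0};
    root.front = std::make_unique<BSPNode>();
    root.front->isLeaf = true; root.front->inside = false;
    root.back = std::make_unique<BSPNode>();
    root.back->isLeaf = true;  root.back->inside = true;
    std::printf("(-1,0,0) inside? %d\n", contains(root, -1, 0, 0));  // 1
    std::printf("( 1,0,0) inside? %d\n", contains(root,  1, 0, 0));  // 0
    return 0;
}
```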
Other Representations

- Fractals: a procedural representation technique
- Particle systems: a physically-based modeling technique
- Skeletal models: a hierarchy of ‘bones’ (inner layer) covered with ‘skin’ (outer layer)
- Scene graphs: a graph-like data structure for objects in a scene
- Application specific
Modeling Transformation
Objective

- Objects of a scene are individually represented in their own reference frames (object/local coordinate systems)
- Through modeling transformations, objects are combined into a world coordinate scene
Basic Transformations (2D)

- Translation
- Rotation
- Scale
- Shear
Matrix Representation

Represent a 2D transformation by a matrix:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

Multiplying the matrix by a column vector applies the transformation to the point:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \quad\Rightarrow\quad \begin{aligned} x' &= ax + by \\ y' &= cx + dy \end{aligned}$$
Basic 2D Transformations

Basic 2D transformations as 3x3 matrices (homogeneous coordinate system):

Translate:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Scale:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Rotate:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Shear:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & sh_x & 0 \\ sh_y & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
Basic 3D Transformations

Identity:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Scale:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Translation:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Mirror about the Y/Z plane:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Basic 3D Transformations

Rotate around the Z axis:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Rotate around the Y axis:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$

Rotate around the X axis:
$$\begin{bmatrix} x' \\ y' \\ z' \\ w \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$$
Issue

- Sometimes, to construct a world coordinate scene, we need to perform multiple geometric transformations on an object
- How to do that?
- In what sequence?
Example

- An object (rectangle) ABCD defined in local coordinates is used to define the chimney of a house (in world coordinates)
- Transforming ABCD to A′B′C′D′ is not possible with a single basic transformation; we need two: scaling and translation
- How to calculate the new vertices?
Procedure

- Multiply the current vertices with the transformation matrix (the basic procedure)
- However, the transformation matrix here is a composition of two matrices: the scaling matrix and the translation matrix
Procedure

- First step: determine the basic matrices
- The object is halved in length while the height remains the same
- Scaling matrix:

$$S = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Procedure

- First step: determine the basic matrices
- Vertex D(0, 0) is now positioned at D′(5, 5): a 5-unit displacement along both the horizontal and vertical directions
- Translation matrix:

$$T = \begin{bmatrix} 1 & 0 & 5 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix}$$
Procedure

- Second step: obtain the composite matrix
- Multiply the basic matrices in sequence
- We follow the right-to-left rule in forming the sequence
Procedure

- Second step: obtain the composite matrix
- The first transformation applied to the object is the rightmost in the sequence
- The next transformation is placed to its left
- We continue in this way till the last transformation
Procedure

- Second step: obtain the composite matrix. Here scaling is applied first, then translation, so:

$$M = T \cdot S = \begin{bmatrix} 1 & 0 & 5 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0.5 & 0 & 5 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix}$$
Procedure

- Third step: obtain the new coordinate positions
- Multiply the current vertices with the composite matrix
Procedure

- Third step: obtain the new coordinate positions
- The results are in homogeneous coordinates; to get Cartesian coordinates, we divide by h (which is 1 for geometric transformations)
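The whole three-step procedure can be expressed compactly in code. A minimal sketch in C++ using the matrices above (S halves the length, T translates by (5, 5); the helper names are illustrative):

```cpp
#include <cstdio>

struct Mat3 { double m[3][3]; };
struct Vec3 { double v[3]; };

// C = A * B: applying C is the same as applying B first, then A
// (the right-to-left rule).
Mat3 multiply(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C.m[i][j] += A.m[i][k] * B.m[k][j];
    return C;
}

// Apply a 3x3 homogeneous transform to a point.
Vec3 transform(const Mat3& M, const Vec3& p) {
    Vec3 q{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            q.v[i] += M.m[i][k] * p.v[k];
    return q;
}

int main() {
    Mat3 S{{{0.5, 0, 0}, {0, 1, 0}, {0, 0, 1}}};  // halve the length
    Mat3 T{{{1, 0, 5}, {0, 1, 5}, {0, 0, 1}}};    // displace by (5, 5)
    Mat3 M = multiply(T, S);        // scaling first, then translation

    Vec3 D{{0, 0, 1}};              // vertex D in homogeneous form (h = 1)
    Vec3 Dp = transform(M, D);
    // Divide by h to get Cartesian coordinates (h remains 1 here).
    std::printf("D' = (%g, %g)\n", Dp.v[0] / Dp.v[2], Dp.v[1] / Dp.v[2]);
    return 0;
}
```

The output is D′ = (5, 5), matching the displacement stated earlier.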
Note

- Matrix multiplication is not commutative, so the sequence is very important
- With the wrong sequence we will not get the correct result
Illumination (adding color)
Why Lighting?

If we don’t have lighting effects, nothing looks three dimensional!
Definitions

- Illumination: transport of energy from light sources to surfaces and points (includes both direct and indirect illumination)
- Lighting: the process of computing the luminous intensity (i.e., outgoing light) at a particular 3D point, usually on a surface
- Shading/surface rendering: the process of assigning colors to pixels
Perception of Color
Basic Illumination Model

Important components:
- Ambient light
- Diffuse reflection
- Specular reflection

$$I_p = I_{amb} + I_{diff} + I_{spec}$$
Reflection Types
Diffuse Reflection – Ambient Light

- For background lighting effects, assume that every surface is fully illuminated by the scene’s ambient light $I_a$
- Therefore, the ambient contribution to the diffuse reflection is

$$I_{ambdiff} = k_d I_a$$
Diffuse Reflection

- The amount of incident light on a surface (following Lambert’s law) is $I_s \cos\theta$
- The diffuse reflection component is

$$I_{diff} = k_d I_s \cos\theta$$
Diffuse Reflection

- Let N = surface normal, L = unit direction vector to the light source
- Then $N \cdot L = \cos\theta$
- Thus,

$$I_{diff} = \begin{cases} k_d I_s (N \cdot L) & \text{if } N \cdot L > 0 \\ 0 & \text{if } N \cdot L \le 0 \end{cases}$$
Specular Reflection

- Specular reflection intensity:

$$I_{spec} = k_s I_s \cos^{n_s}\phi$$

$$I_{spec} = \begin{cases} k_s I_s (V \cdot R)^{n_s} & \text{if } V \cdot R > 0 \text{ and } N \cdot L > 0 \\ 0.0 & \text{if } V \cdot R < 0 \text{ or } N \cdot L \le 0 \end{cases}$$

- R can be represented as $(2\, N \cdot L)\, N - L$
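Combining the ambient, diffuse, and specular terms gives the full model. A minimal sketch in C++ for a single light source (the vector helpers and the coefficient values in main are illustrative):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// I = kd*Ia + kd*Is*(N.L) + ks*Is*(V.R)^ns, with the diffuse and
// specular terms dropped when the surface faces away from the light.
// N, L, V are unit vectors (normal, to-light, to-viewer).
double illuminate(Vec3 N, Vec3 L, Vec3 V,
                  double Ia, double Is,
                  double kd, double ks, double ns) {
    double I = kd * Ia;                          // ambient contribution
    double NdotL = dot(N, L);
    if (NdotL > 0.0) {
        I += kd * Is * NdotL;                    // diffuse (Lambert)
        Vec3 R = sub(scale(N, 2.0 * NdotL), L);  // R = (2 N.L) N - L
        double VdotR = dot(V, R);
        if (VdotR > 0.0)
            I += ks * Is * std::pow(VdotR, ns);  // specular highlight
    }
    return I;
}

int main() {
    Vec3 N{0, 0, 1}, L{0, 0, 1}, V{0, 0, 1};  // light and viewer head-on
    std::printf("I = %.2f\n", illuminate(N, L, V, 0.2, 1.0, 0.6, 0.4, 10.0));
    return 0;
}
```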


Things We Ignored!

- Intensity attenuation (spatial, angular)
- Multiple light sources
- Surface transparency
Applying Illumination

- Computing color: a fairly expensive calculation
- Shading/surface rendering methods
Flat Surface Rendering

- The simplest method for rendering a polygon surface: the same color is assigned to all surface positions
- Illumination at a single point on the surface is calculated and used for the entire surface
- Extremely fast, but can be unrealistic
Intensity Representation

- An illumination model gives intensity as any value in the range 0.0 to 1.0
- A graphics system can display only a limited set of intensity values
- The calculated intensity must be converted to one of the allowable system values (without affecting perception)
Representing Intensities

Use intensity levels whose successive ratios are constant:

$$\frac{I_1}{I_0} = \frac{I_2}{I_1} = \dots = \frac{I_n}{I_{n-1}} = r, \qquad I_k = r^k I_0 \;\; (k > 0)$$

Example: a B/W monitor with 8 bits/pixel
- n = 255
- r = 1.0182 (typical)
- I0 = 0.01 (say)
- Intensities: 0.0100, 0.0102, 0.0104, …, 1
- Assign 256 bit patterns to the 256 intensities
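A minimal sketch in C++ of building such an intensity table; here r is derived from I0 and n so that the top level comes out to 1.0 (which reproduces r ≈ 1.0182 for the values above):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int n = 255;        // 8 bits/pixel -> levels 0..255
    const double I0 = 0.01;   // lowest displayable intensity (say)
    // Choose r so that the top level I_n = r^n * I0 equals 1.0.
    const double r = std::pow(1.0 / I0, 1.0 / n);   // ~1.0182 here

    double table[n + 1];
    for (int k = 0; k <= n; ++k)
        table[k] = I0 * std::pow(r, k);             // I_k = r^k * I0

    std::printf("r = %.4f, I[0] = %.4f, I[1] = %.4f, I[255] = %.4f\n",
                r, table[0], table[1], table[n]);
    return 0;
}
```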
Intensity – IM to Device

- Let I = an intensity value calculated by an illumination model (IM)
- Calculate the nearest intensity level Ik supported by the device (from a table of pre-computed intensity values); bit patterns are assigned to those levels
3D Viewing

- Just like taking a photograph!
- World coordinates to viewing coordinates: viewing transformations
Camera Parameters

Important camera parameters to specify:
- Camera (eye) position in the world coordinate system (also called the viewpoint/viewing position)
- Center of interest (also called the look-at point)
- Orientation (which way is up?): the view-up vector
View Coordinate Frame

- Known: eye position, center of interest, view-up vector
- To find out: the new origin and three basis vectors
View Coordinate Frame

Put it all together:
- Eye space origin: (ex, ey, ez)
- Basis vectors:
  - n = (eye – COI) / |eye – COI|
  - u = (V_up x n) / |V_up x n|
  - v = n x u
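A minimal sketch of this construction in C++ (the vector helpers and the sample camera values are illustrative):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 a) {
    double len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / len, a.y / len, a.z / len};
}

int main() {
    Vec3 eye{0, 0, 5}, coi{0, 0, 0}, vup{0, 1, 0};

    Vec3 n = normalize(sub(eye, coi));  // n points from COI toward the eye
    Vec3 u = normalize(cross(vup, n));  // u = (V_up x n) / |V_up x n|
    Vec3 v = cross(n, u);               // v completes the right-handed frame

    std::printf("n=(%g,%g,%g) u=(%g,%g,%g) v=(%g,%g,%g)\n",
                n.x, n.y, n.z, u.x, u.y, u.z, v.x, v.y, v.z);
    return 0;
}
```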
World to VC Transformation

- Transform object descriptions from WC to VC
- Transformation matrix Mw2v: P′ = Mw2v · P
WC to VC Transformation
Projection

- Map objects from 3D space to the 2D screen
- Defined by straight lines called projectors
Planar Geometric Projections

Projectors are lines that either:
- Converge at a center of projection (perspective projection), or
- Are parallel (parallel projection); the center of projection is at infinity
Projection Transformation

Once the WC-to-VC transformation is done, the 3D objects are projected onto the 2D view plane
Projection Transformation

Important things to control (to define the view volume):
- Parallel projection: the view volume is a rectangular parallelepiped
- Perspective projection: the view volume is a frustum
Parallel Projection

Matrix form (assuming the view plane at a distance d from the origin, along the –z direction):

$$\begin{bmatrix} x'' \\ y'' \\ z'' \\ w \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -d \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

Note that this is in homogeneous coordinates: the actual projected point is x′ = x″/w, etc.
Perspective Projection

Matrix form (in the homogeneous coordinate system):

$$\begin{bmatrix} x'' \\ y'' \\ z'' \\ w \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/d & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
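Since only w changes under this matrix, the projection reduces to a homogeneous divide. A minimal sketch in C++ (the sample point and the view-plane distance d are illustrative):

```cpp
#include <cstdio>

int main() {
    const double d = 1.0;            // view-plane distance along -z
    // A point in viewing coordinates, in front of the viewer.
    double x = 2.0, y = 1.0, z = -4.0;

    // Multiply by the perspective matrix: x and y pass through,
    // only the homogeneous coordinate changes, w = -z/d.
    double xh = x, yh = y, w = -z / d;

    // Homogeneous divide gives the projected point on the view plane.
    std::printf("projected: (%g, %g)\n", xh / w, yh / w);  // (0.5, 0.25)
    return 0;
}
```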
Viewport Transformation

Transform from projection coordinates (normalized clipping window) to device coordinates
Window vs Viewport

- Window: the world-coordinate area selected for display (what is to be viewed)
- Viewport: the area on the display device to which a window is mapped (where it is to be displayed)
Viewport Transformations

Transformation matrix:

$$M_{WV} = \begin{bmatrix} s_x & 0 & t_x \\ 0 & s_y & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

If sx ≠ sy, the transformed object is scaled (up/down) by different amounts along the two axes
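A minimal sketch in C++ of deriving sx, sy, tx, ty from a window and a viewport and applying the mapping (the rectangle values are illustrative):

```cpp
#include <cstdio>

// Axis-aligned rectangle given by its corner coordinates.
struct Rect { double xmin, ymin, xmax, ymax; };

int main() {
    Rect win = {0, 0, 10, 10};    // window (world coordinates)
    Rect vp  = {0, 0, 800, 600};  // viewport (device coordinates)

    // Scale factors and translation terms of M_WV.
    double sx = (vp.xmax - vp.xmin) / (win.xmax - win.xmin);
    double sy = (vp.ymax - vp.ymin) / (win.ymax - win.ymin);
    double tx = vp.xmin - sx * win.xmin;
    double ty = vp.ymin - sy * win.ymin;

    // Apply to a sample point; here sx != sy, so the two axes
    // are scaled by different amounts.
    double x = 5, y = 5;
    std::printf("(%g, %g) -> (%g, %g)\n", x, y, sx * x + tx, sy * y + ty);
    return 0;
}
```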
Clipping and HSR
Clip Objects

Objects that are partially within the viewing volume need to be clipped
Hidden Surface Removal

We must determine what is visible within a scene from a chosen viewing position
Note

- Both clipping and HSR are done by applying corresponding algorithms
- Many algorithms are available for both
Rendering

What

- The pipeline stages covered so far assumed continuous space
- The methods considered points without any constraint on the coordinates: they can be any real number
- To draw something on the screen, we need to consider the pixel grid: discrete space
What

- Need to map from continuous to discrete space
- The mapping process is called rendering
- Also called scan conversion/rasterization
Line Scan Conversion

- A line segment is defined by the coordinate positions of the line end-points
- What happens when we try to draw this on a pixel-based display?
- How to choose the correct pixels?
Bresenham Line Algorithm

Move across the x axis in unit intervals and at each step choose between two different y coordinates
Bresenham Line Algorithm

1. Input the two line end-points, storing the left end-point in $(x_0, y_0)$
2. Plot the point $(x_0, y_0)$
3. Calculate the constants $\Delta x$, $\Delta y$, $2\Delta y$, and $2\Delta y - 2\Delta x$, and obtain the first value of the decision parameter as $p_0 = 2\Delta y - \Delta x$
4. At each $x_k$ along the line, starting at $k = 0$, perform the following test: if $p_k < 0$, the next point to plot is $(x_{k+1}, y_k)$ and $p_{k+1} = p_k + 2\Delta y$; otherwise, the next point to plot is $(x_{k+1}, y_{k+1})$ and $p_{k+1} = p_k + 2\Delta y - 2\Delta x$
5. Repeat step 4 until the right end-point is reached

- The algorithm above assumes slopes of less than 1 ($|m| < 1.0$); for other slopes we need to adjust the algorithm slightly
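A minimal sketch of the algorithm in C++ for the case handled above (0 < m < 1, left end-point first); setPixel is a stand-in for whatever pixel-write call the display system provides:

```cpp
#include <cstdio>

// Stand-in for the device's pixel-write operation.
void setPixel(int x, int y) { std::printf("(%d, %d)\n", x, y); }

// Bresenham line for 0 < m < 1 with (x0, y0) the left end-point.
void bresenhamLine(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int p = 2 * dy - dx;          // initial decision parameter p0
    int x = x0, y = y0;
    setPixel(x, y);               // plot the left end-point
    while (x < x1) {              // step 4, up to the right end-point
        ++x;
        if (p < 0) {
            p += 2 * dy;          // keep the same y
        } else {
            ++y;                  // move up to the next y
            p += 2 * dy - 2 * dx;
        }
        setPixel(x, y);
    }
}

int main() {
    bresenhamLine(0, 0, 8, 4);    // a line with slope 0.5
    return 0;
}
```

Note that the whole inner loop uses only integer additions and comparisons, which is what makes the algorithm fast.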
Left Out

- Circle rendering
- Curve rendering
- Surface rendering
- Character rendering
- Anti-aliasing (to make the rendering smooth)

There are algorithms for each
Need for Parallelism

Graphics operations are highly parallel

Need for Parallelism

- Example: consider the modeling transformation stage
- We apply transformations (e.g., rotation) to vertices

Need for Parallelism

- Transformation = multiplication of the transformation matrix with a vertex vector
- The same vector-matrix multiplication is done for all the vertices we want to transform

Need for Parallelism

- Instead of serial multiplication of one matrix-vector pair at a time, there is a significant gain in performance if we operate on all the vectors at the same time
- Important in real-time rendering, where millions of vertices are processed per second
GPU

- CPUs cannot take advantage of this inherent parallelism in graphics operations; they are not designed to do that
- Almost all graphics systems nowadays come with a separate graphics card containing its own processing unit and memory elements, known as the graphics processing unit or GPU
Multicore

- The GPU is a multicore system: it contains a large number of cores or unit processing elements
- Each core is a stream processor: it works on data streams
SM

- Cores are capable of performing simple integer and floating-point arithmetic operations only
- Multiple cores are grouped together to form streaming multiprocessors (SMs)
SIMD Idea

- Consider geometric transformation of vertices
- The instruction (multiplication) is the same; the data (vertex vectors) varies
- An instance of single instruction, multiple data (SIMD)
GPU Organization

Each SM is designed to perform SIMD operations
How It Works

- Most real-time graphics systems assume the scene is made of triangles
- Surfaces such as quadrilaterals or curved surfaces are converted to triangle meshes

How It Works

- Through APIs supported in graphics libraries (OpenGL/Direct3D), triangles are sent to the GPU one vertex at a time
- The GPU assembles the vertices into triangles

How It Works

- Vertices are expressed in homogeneous coordinates
- The objects they define are represented in local/modeling coordinates
- The GPU performs modeling transformations on the vertices
How It Works

- A transformation (single/composite) is achieved with a single matrix (transformation)–vector (point) multiplication

How It Works

- The multicore GPU performs multiple such operations simultaneously: multiple vertices are transformed at the same time

How It Works

- Output: a stream of triangles in the world coordinate system
- The viewer is located at the origin and the view direction is aligned with the z-axis
How It Works

- Next, the GPU computes vertex colors based on the lights defined for the scene
- Recall that color can be computed with vector dot products and a series of add and multiply operations
- The GPU performs this simultaneously for multiple vertices

How It Works

- Next stage: each colored 3D vertex is projected onto the view plane
- The GPU does this using matrix-vector multiplication
- Output: a stream of triangles in screen/device coordinates, ready to be converted to pixels

How It Works

- Each device-space triangle overlaps some pixels on the screen
- In the rasterization stage, these pixels are determined
How It Works

- GPU designers over the years have incorporated many rasterization algorithms
- All exploit one crucial observation: each pixel can be treated independently from all other pixels

How It Works

- This leads to the possibility of handling all pixels in parallel
- Thus, given the device-space triangles, we can determine the colors of all pixels simultaneously
How It Works

- During the pixel processing stage, two more activities take place: surface texturing and hidden surface removal

How It Works

- Simplest surface texturing method: texture images are draped over geometry to give the illusion of detail
- The pixel color is replaced by the texture color

How It Works

- GPUs store textures in high-speed memory, which each pixel calculation must access
- Access is very regular (nearby pixels tend to access nearby texture image locations), so specialized memory caches are used to reduce access time
How It Works

- GPUs implement the depth (Z)-buffer algorithm for HSR
- All modern-day GPUs contain the depth buffer as dedicated memory
- It stores the distance of the viewer from each pixel

How It Works

- The GPU compares a new pixel’s distance with the distance of the pixel already present
- Display memory is updated only if the new pixel is closer (see the algorithm in Lec 23)
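A minimal sketch of the depth-buffer test in C++ (a toy framebuffer; the convention that a smaller depth value means closer to the viewer is an assumption):

```cpp
#include <cstdio>
#include <vector>

// A tiny framebuffer with a color buffer and a depth (Z) buffer.
struct FrameBuffer {
    int w, h;
    std::vector<unsigned> color;  // packed RGB per pixel
    std::vector<float> depth;     // distance from the viewer per pixel

    FrameBuffer(int w_, int h_)
        : w(w_), h(h_), color(w_ * h_, 0),
          depth(w_ * h_, 1e30f) {}  // initialize to "infinitely far"

    // Write the pixel only if the new fragment is closer to the viewer.
    void plot(int x, int y, float z, unsigned rgb) {
        int i = y * w + x;
        if (z < depth[i]) {    // depth test: smaller z = closer (assumed)
            depth[i] = z;
            color[i] = rgb;
        }
    }
};

int main() {
    FrameBuffer fb(4, 4);
    fb.plot(1, 1, 5.0f, 0xFF0000);  // red fragment at depth 5
    fb.plot(1, 1, 8.0f, 0x00FF00);  // green fragment behind it: rejected
    fb.plot(1, 1, 2.0f, 0x0000FF);  // blue fragment in front: wins
    std::printf("pixel(1,1) = %06X, depth = %g\n",
                fb.color[1 * 4 + 1], fb.depth[1 * 4 + 1]);
    return 0;
}
```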
Shaders
Shader Idea

In the earlier discussion, we covered all the pipeline stages in two broad groups of activities:
- Vertex (or geometry) processing
- Pixel processing

Shader Idea

- During the early years of evolution, GPUs used to come with a fixed-function hardware pipeline
- All the stages were pre-programmed and embedded into the hardware
- The GPU contained dedicated components for specific tasks (the user had no control over how they worked or which processing unit performed which stage of the pipeline)

Shader Idea

- To leverage GPU power better, modern GPUs are designed to be programmable
- The fixed-function units are replaced by a unified grid of processors, known as shaders

Shader Idea

- Any processing unit can do calculations for any pipeline stage
- GPU elements (processing units and memory) can be reused through user programs
Shader Programming

- With a programmable GPU, it is possible for the programmer to modify how the hardware processes vertices and shades pixels
- This is done by writing vertex shaders and fragment shaders (also known as vertex programs and fragment programs)
- Known as shader programming (also by many other names, including GPU programming, graphics hardware programming, etc.)
Vertex Shader

Used to process vertices (i.e., geometry):
- Modeling transformations
- Lighting
- Projection to screen coordinates
Fragment Shader

Programs that perform computations in the pixel processing stage to determine:
- How each pixel is shaded (rendering)
- How texture is applied (texture mapping)
- Whether a pixel should be drawn (HSR)
Fragment Shader

The term fragment shader reflects that the GPU at any instant can process a subset (or fragment) of all the screen pixel positions
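For a flavor of what these programs look like, below is a minimal vertex/fragment shader pair written in GLSL, held as C++ string constants the way an OpenGL application would pass them to the driver via glShaderSource; the uniform and attribute names are illustrative:

```cpp
// Minimal GLSL shader pair held as C++ string constants.
// A real OpenGL program would hand these to glShaderSource()
// and compile them with glCompileShader().
const char* vertexShaderSrc = R"(
#version 330 core
layout(location = 0) in vec3 position;  // vertex in modeling coordinates
uniform mat4 mvp;                       // composite model-view-projection matrix
void main() {
    // One matrix-vector multiplication per vertex, as described above.
    gl_Position = mvp * vec4(position, 1.0);
}
)";

const char* fragmentShaderSrc = R"(
#version 330 core
out vec4 fragColor;
uniform vec3 baseColor;                 // color supplied by the application
void main() {
    // Decide the color of this one pixel (fragment).
    fragColor = vec4(baseColor, 1.0);
}
)";

int main() { return 0; }  // strings only; compiling them needs a GL context
```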
Note

- Shader programs are small pieces of code
- They are sent to the graphics hardware from user programs
- They are executed on the graphics hardware
Conclusion
Idea

- Rendering is computation intensive
- The previous discussion was meant to give you an idea
- We haven’t discussed interactive manipulation (which involves even more computations)
Good News!

- You may never need to learn it
- You can use UIs (e.g., the Unity Hub UI)
- You can use graphics libraries to create software of your own (e.g., OpenGL)

It always helps to be better informed.
Advantage of Knowledge

- You will be able to better appreciate the requirements (why good prototyping and simulation require high-end machines)
- The utility of cloud-based support (if you don’t have high-end machines)
- Problems with your system (why it slows down/hangs): a mismatch between the availability of resources and your requirements (e.g., high-quality photo-realistic rendering)
- You will be able to build on your own!
To know more

- Bhattacharya, S. (2015). Computer Graphics. Oxford University Press. ISBN-13: 978-0-19-809619-1; ISBN-10: 0-19-809619-4
- My NPTEL course on “computer graphics”
