
Computer Graphics and Visualization

Answer Report
Exam: June-July 2024
Scheme: 2022
Subject Code: BCG4A2

Prepared by:
P PRAJWAL (4SN23CG009)
ISHWARYA (4SN23CG004)
SHISHIR R KULAL (4SN23CG019)
SHRINIDHI ANCHAN (4SN23CG021)
SHRAVYA ULAL (4SN23CG020)

1a. What is computer graphics? Explain applications of computer graphics.
Answer:

Definition of Computer Graphics:

Computer Graphics is the science and art of creating, manipulating, and representing
visual content using computational techniques and hardware. It involves the generation of
both static and dynamic images, ranging from simple 2D shapes to complex 3D scenes,
animations, and interactive visual experiences.

At its core, computer graphics transforms numerical and logical data into graphical
formats that are easily understood and interpreted by humans.

Key Components of Computer Graphics:

Component | Description
Modeling | Mathematical representation of objects in 2D or 3D space using geometries such as points, lines, curves, polygons, and surfaces.
Rendering | Process of converting models into images by simulating light interactions. Involves rasterization, ray tracing, shading, and texture mapping.
Animation | Creating motion by changing object parameters over time. Includes keyframe animation, motion capture, and procedural animation.
Interaction | Real-time user interaction with graphics using input devices (mouse, keyboard, VR controllers). Common in games and simulations.

Applications of Computer Graphics with Real-Life Examples:

Computer Graphics has multidisciplinary applications that span almost every industry.
Below are major domains where it plays a critical role:

1. Entertainment & Media

• Movies: Use of Computer-Generated Imagery (CGI) to create realistic or fantasy worlds.
Example: Avatar – realistic environment of Pandora and characters created using advanced CGI.

• Video Games: Real-time 3D rendering with physics-based effects.
Example: Fortnite uses dynamic lighting, real-time shadows, and texture mapping.

2. Scientific Visualization

• Used for visualizing complex scientific data to understand trends, anomalies, or simulations.

• Example:

o Weather Forecasting: Visualization of storm paths and climate models by agencies like NOAA.

o Medical Imaging: 3D reconstruction of MRI and CT scans to assist surgeons.

3. Computer-Aided Design (CAD)

• CAD software enables engineers and architects to design, model, and simulate
real-world objects.

• Example:

o Mechanical Design: AutoCAD is used to create detailed machine part diagrams.

o Architecture: Tools like SketchUp and Revit enable 3D walkthroughs of building designs.

4. Virtual and Augmented Reality (VR/AR)

• Graphics enable immersive experiences that blend real and virtual environments.

• Example:

o VR Training: Flight simulators train pilots in risk-free environments.

o AR Apps: Pokémon GO overlays digital characters onto real-world scenes.

5. Graphical User Interfaces (GUI)

• Visual representation of system functionality for easier user interaction.

• Example:

o Desktop environments like Windows/macOS with icons and interactive menus.

o Mobile apps with touch-friendly graphical elements like buttons, sliders, etc.

6. Education & E-Learning

• Graphics make abstract concepts easier to grasp through visual aids and
interactivity.

• Example:

o PhET Simulations in physics and chemistry allow virtual experiments.

o 3D anatomical models for biology education using tools like BioDigital.

7. Advertising & Marketing

• Attractive visuals and animations are used for branding and product promotions.

• Example:

o 3D Product Configurators for cars on automotive websites.

o Motion Graphics in advertisements and logo animations.

8. Art & Digital Design

• Artists use software to create digital paintings, illustrations, and generative art.

• Example:

o Adobe Illustrator, Photoshop, and Procreate for digital artworks.

o Generative art created using Processing or TouchDesigner.

9. Simulation & Training

• Used for training in risk-sensitive fields like defense, medicine, and aerospace.

• Example:

o Military battlefield simulations.


o Virtual surgery platforms for medical students.

10. Geographic Information Systems (GIS)

• Enables spatial visualization and analysis using layered graphical data.

• Example:

o Google Earth for 3D terrain and satellite views.

o Urban planners simulate traffic and infrastructure layouts.

Comparison Table: Applications Overview

Domain | Application Area | Example
Entertainment | CGI, Gaming | Toy Story, Call of Duty
Scientific Research | Visualization | Molecular structure modeling
Engineering | CAD | Tesla car design in SolidWorks
Education | Interactive Learning | Virtual chemistry labs
Healthcare | Medical Imaging | 3D MRI reconstruction
VR/AR | Simulation & Realism | Flight simulator, Pokémon GO
Marketing | Motion Graphics | Animated commercials
GIS | Urban Planning | 3D city layout in ArcGIS

Conclusion:

Computer Graphics plays a pivotal role in both creative and technical fields by enabling
us to visualize, simulate, and interact with digital content. Its applications—from movie
production and gaming to scientific research and education—demonstrate how integral it
has become to modern computing. With the rapid evolution of GPUs, real-time
rendering engines like Unreal Engine, and AI-powered graphics, the future of
computer graphics continues to push the boundaries of realism and interactivity.
1b. Explain in detail graphics pipeline architecture.
Answer:

Graphics Pipeline Architecture: Detailed Explanation

The graphics pipeline is a sequence of stages that transforms 3D models into 2D pixels
on a screen. It is the core architecture behind real-time rendering in games, simulations,
and visualizations. Modern pipelines are implemented in GPUs using APIs like OpenGL,
Vulkan, or DirectX.

1. Stages of the Graphics Pipeline

The pipeline is divided into fixed-function (hardware-controlled) and programmable (shader-based) stages:

A. Application Stage (CPU)

• Task:

o Prepares scene data (meshes, textures, transformations).

o Handles user input, collision detection, and LOD (Level of Detail).

• Example:

o A game engine (Unity/Unreal) sends vertex data to the GPU.

B. Geometry Processing (GPU Vertex Processing)

(i) Vertex Shading (Programmable)

• Task:

o Processes each vertex (position, color, normals).

o Applies Model-View-Projection (MVP) matrix to convert 3D → 2D.

• Math:

gl_Position = Projection * View * Model * VertexPosition;

• Example:

o Moving a character’s arm deforms vertices via skeletal animation.


(ii) Primitive Assembly

• Task:

o Groups vertices into primitives (triangles, lines, points).

(iii) Clipping & Culling

• Clipping: Removes parts of geometry outside the view frustum.

• Culling: Discards back-facing triangles (optimization).

(iv) Perspective Division & Viewport Transform

• Converts clip space → NDC (Normalized Device Coordinates).

• Maps NDC to screen space (pixel coordinates).

C. Rasterization (Fixed-Function)

• Task:

o Converts primitives into fragments (potential pixels).

• Process:

o Scan Conversion: Determines which pixels are covered by a triangle.

o Interpolation: Assigns attributes (color, UVs) to fragments.

D. Fragment Processing (Pixel Shading)

(i) Fragment Shader (Programmable)

• Task:

o Computes final color of each fragment (textures, lighting, shadows).

• Example:

o Phong shading calculates per-pixel lighting:

vec3 light = diffuse + specular + ambient;

(ii) Depth & Stencil Testing

• Depth Test (Z-Buffer):

o Discards fragments behind others (occlusion).


• Stencil Test:

o Masks regions (e.g., mirrors, UI elements).

E. Output Merging (Frame Buffer)

• Task:

o Combines fragments with the frame buffer (final image).

o Applies transparency (alpha blending).

• Output:

o Rendered image sent to the display.

2. Diagram: Graphics Pipeline Flow

Application (CPU) → Vertex Shading → Primitive Assembly → Clipping & Culling → Perspective Division & Viewport Transform → Rasterization → Fragment Shading → Depth/Stencil Tests → Output Merging → Frame Buffer

3. Key Optimizations

1. Instancing: Renders multiple objects in one draw call.

2. Level of Detail (LOD): Reduces mesh complexity at a distance.

3. Deferred Shading: Separates geometry and lighting passes.

4. Real-World Example: Game Rendering


• Step 1: CPU sends a 3D character model to GPU.

• Step 2: Vertex shader applies animations (e.g., running motion).

• Step 3: Rasterizer generates fragments for each triangle.

• Step 4: Fragment shader applies textures (skin, clothes) and shadows.

• Step 5: Depth test ensures correct overlapping (e.g., hair over shoulders).

• Step 6: Final frame displayed at 60 FPS.

5. APIs & Hardware

API | Pipeline Control | Use Case
OpenGL | Fixed + Shaders | Cross-platform apps
Vulkan | Fully programmable | High-performance games
DirectX | Microsoft ecosystem | Xbox/Windows games

Conclusion

The graphics pipeline is a multi-stage process balancing CPU-GPU workload to render complex scenes in real time. Understanding it is crucial for optimizing performance in games, VR, and simulations.

Exam Tip: Focus on shader stages (vertex/fragment) and rasterization for technical
discussions!

2a. With necessary steps explain Bresenham's Line Drawing Algorithm.
Consider the line from (6, 6) to (12, 8). Use the algorithm to rasterize the
line.
Answer:

Introduction:

Bresenham’s Line Drawing Algorithm is a fundamental algorithm in computer graphics used to draw a straight line between two points on a raster display (pixel grid) using only integer arithmetic (no floating-point operations).

It efficiently determines which pixel is the nearest to the theoretical line path at each step
using a decision parameter.

Assumptions:

We are given a line from:


Start point (x₀, y₀) = (6, 6)
End point (x₁, y₁) = (12, 8)

Let’s assume the line lies in the first octant (i.e., slope 0 < m < 1), which is the case here
since:

m = \frac{y_1 - y_0}{x_1 - x_0} = \frac{8 - 6}{12 - 6} = \frac{2}{6} = \frac{1}{3}

So, we increment x by 1 in each step and decide whether to increment y based on the
decision parameter.

Bresenham's Line Drawing Algorithm (for 0 < m < 1):

Step 1:
Calculate differences:

\Delta x = x_1 - x_0 = 6, \quad \Delta y = y_1 - y_0 = 2

Step 2:
Initialize the decision parameter (P₀):
P_0 = 2\Delta y - \Delta x = 2(2) - 6 = 4 - 6 = -2

Step 3:
Start from (x, y) = (6, 6) and move toward x = 12.

Algorithm Steps in Table Format:

Step | x | y | P | Next Pixel Chosen | New P Calculation
0 | 6 | 6 | -2 | (7, 6) | P₁ = P₀ + 2Δy = -2 + 4 = 2
1 | 7 | 6 | 2 | (8, 7) | P₂ = P₁ + 2Δy - 2Δx = 2 + 4 - 12 = -6
2 | 8 | 7 | -6 | (9, 7) | P₃ = P₂ + 2Δy = -6 + 4 = -2
3 | 9 | 7 | -2 | (10, 7) | P₄ = P₃ + 4 = 2
4 | 10 | 7 | 2 | (11, 8) | P₅ = P₄ + 4 - 12 = -6
5 | 11 | 8 | -6 | (12, 8) | Done (end point reached)

Rasterized Line Pixels:

The pixels that will be turned ON (plotted) are:

(6,6), (7,6), (8,7), (9,7), (10,7), (11,8), (12,8)


Diagram of the Rasterized Line:

Here's a rough visual on a grid (Y ↑, X →):

9 |
8 |               *  *
7 |       *  *  *
6 | *  *
5 |
  +-----------------------→ x
    6  7  8  9 10 11 12
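
The stepping logic above can be written compactly in C; a minimal sketch, where plot() is a hypothetical stand-in for an actual pixel write:

#include <stdio.h>

/* Hypothetical pixel-plotting routine; replace with glVertex2i etc. */
static void plot(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Bresenham line for 0 < slope < 1 (first octant). */
void bresenham(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int p = 2 * dy - dx;            /* initial decision parameter P0 */
    int x = x0, y = y0;
    plot(x, y);
    while (x < x1) {
        x++;
        if (p < 0) {
            p += 2 * dy;            /* keep the same y */
        } else {
            y++;                    /* step up in y */
            p += 2 * dy - 2 * dx;
        }
        plot(x, y);
    }
}

int main(void) {
    bresenham(6, 6, 12, 8);         /* prints the seven pixels above */
    return 0;
}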

Advantages of Bresenham’s Algorithm:

• Uses only integer calculations (no floating point).

• Very efficient and suitable for real-time rendering on raster devices.

• Can be extended to draw circles and ellipses (Bresenham’s circle/ellipse algorithms).

Conclusion:

Bresenham’s Line Drawing Algorithm provides a fast and accurate method to draw lines
in a grid-based environment like computer displays. In this example, we efficiently
rasterized a line from (6,6) to (12,8) using step-by-step decision parameters, and only
plotted pixels that approximate the ideal line closely.
2b Explain the Various Graphics Functions with Examples.
Answer:

Introduction:

OpenGL (Open Graphics Library) is a cross-platform graphics API used to render 2D and 3D graphics. GLUT (OpenGL Utility Toolkit) is a library that provides windowing and input handling support for OpenGL programs. Together, they offer a rich set of functions for building interactive and high-performance graphics applications.

Categories of OpenGL/GLUT Graphics Functions:

1 Initialization & Window Management (GLUT)

These functions are used to set up the environment and manage windows.

Function | Description | Example
glutInit() | Initializes the GLUT library | glutInit(&argc, argv);
glutInitDisplayMode() | Sets display mode (RGB, double buffering) | glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
glutInitWindowSize() | Sets window dimensions | glutInitWindowSize(800, 600);
glutCreateWindow() | Creates window with title | glutCreateWindow("OpenGL App");
glutMainLoop() | Starts the event-processing loop | glutMainLoop();

2 Rendering Functions (OpenGL Core GL)

These are used to draw geometric primitives like points, lines, triangles, etc.

Function | Description | Example
glBegin(mode) / glEnd() | Defines a group of vertices | glBegin(GL_LINES); ... glEnd();
glVertex2f(x, y) | Specifies a 2D vertex | glVertex2f(0.5f, 0.5f);
glVertex3f(x, y, z) | Specifies a 3D vertex | glVertex3f(0.0, 1.0, -5.0);
glColor3f(r, g, b) | Sets the current color | glColor3f(1.0, 0.0, 0.0);
glClear() | Clears buffers (e.g., screen) | glClear(GL_COLOR_BUFFER_BIT);
glFlush() | Forces rendering of OpenGL commands | glFlush();
glutSwapBuffers() | Swaps front and back buffers (double buffering) | glutSwapBuffers();

3 Transformations (Modeling & Viewing)

Function | Description | Example
glTranslatef(x, y, z) | Translates (moves) an object | glTranslatef(0.5f, 0.0f, 0.0f);
glRotatef(angle, x, y, z) | Rotates an object | glRotatef(45, 0, 0, 1);
glScalef(x, y, z) | Scales an object | glScalef(1.5, 1.5, 1);
glLoadIdentity() | Loads identity matrix (resets transform) | glLoadIdentity();

4 Projection & Viewport Setup

Function | Description | Example
glMatrixMode() | Sets matrix mode (e.g., GL_PROJECTION) | glMatrixMode(GL_PROJECTION);
glOrtho() | Defines orthographic projection | glOrtho(-1, 1, -1, 1, -1, 1);
gluPerspective() | Sets perspective projection | gluPerspective(45.0, 1.0, 1.0, 10.0);
glViewport() | Maps normalized device coords to window | glViewport(0, 0, width, height);

Example: Drawing a Colored Triangle using GLUT + OpenGL

#include <GL/glut.h>

void display() {

glClear(GL_COLOR_BUFFER_BIT); // Clear the screen

glLoadIdentity(); // Reset any transformation

glBegin(GL_TRIANGLES); // Begin drawing a triangle

glColor3f(1.0, 0.0, 0.0); // Red

glVertex2f(-0.5f, -0.5f);

glColor3f(0.0, 1.0, 0.0); // Green

glVertex2f(0.5f, -0.5f);

glColor3f(0.0, 0.0, 1.0); // Blue

glVertex2f(0.0f, 0.5f);

glEnd();

glFlush(); // Display the result

}
int main(int argc, char** argv) {

glutInit(&argc, argv); // Initialize GLUT

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); // Single buffer, RGB mode

glutInitWindowSize(500, 500); // Window size

glutCreateWindow("Triangle in OpenGL"); // Create window

glutDisplayFunc(display); // Set display function

glutMainLoop(); // Enter event loop

return 0;

}

Output Diagram:

A triangle with:

• Left vertex in red

• Right vertex in green

• Top vertex in blue

Displayed in a 500×500 window. (The background is black by default; calling glClearColor(1, 1, 1, 1) during initialization would make it white.)

Conclusion:

GLUT and OpenGL together offer a robust framework for building graphics applications.
You can control every aspect of rendering—from shape drawing and coloring to camera
projection and user input.
3a Explain 2D Geometric Transformations in detail
Answer:

2D geometric transformations are mathematical operations used to alter the position, size, and orientation of objects in a 2D plane. These transformations are fundamental in computer graphics, image processing, and game development.

1. Types of 2D Transformations

A. Translation (Linear Displacement)

• Definition: Moves an object by a fixed distance in the x and y directions.

• Matrix Representation:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

o (x, y) → Original point.

o (x', y') → Transformed point.

o t_x, t_y → Translation distances.

• Example:

o Moving a rectangle from (2, 3) to (5, 7) with t_x = 3, t_y = 4.

B. Rotation (Circular Movement Around Origin)

• Definition: Rotates an object by angle θ about the origin (0, 0).

• Matrix Representation:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

o Positive θ → Counter-clockwise rotation.

o Negative θ → Clockwise rotation.

• Example:

o Rotating a triangle by 90° around the origin.


C. Scaling (Resizing an Object)

• Definition: Changes the size of an object by scaling factors s_x (x-axis) and s_y (y-axis).

• Matrix Representation:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

o s_x, s_y > 1 → Enlargement.

o 0 < s_x, s_y < 1 → Reduction.

o Unequal scaling (s_x ≠ s_y) → Distortion.

• Example:

o Scaling a circle by s_x = 2, s_y = 1 turns it into an ellipse.

D. Shearing (Slanting an Object)

• Definition: Skews an object along the x or y-axis.

• Matrix Representation:

o X-direction shear:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & sh_x & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

o Y-direction shear:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ sh_y & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

o sh_x, sh_y → Shear factors.

• Example:

o Shearing a square along the x-axis to form a parallelogram.

E. Reflection (Mirror Image)

• Definition: Flips an object over a line (axis).


• Matrix Representation:

o Over X-axis:

\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

o Over Y-axis:

\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

o Over line y = x:

\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}

• Example:

o Reflecting a triangle over the y-axis.
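
Tying the matrices above to code, a minimal C sketch (not from the original answer) that applies a homogeneous 2D transform to a point:

#include <stdio.h>
#include <math.h>

/* Apply a 3x3 homogeneous transform m to the point (x, y, 1). */
void transform(const double m[3][3], double x, double y,
               double *xp, double *yp) {
    *xp = m[0][0] * x + m[0][1] * y + m[0][2];
    *yp = m[1][0] * x + m[1][1] * y + m[1][2];
}

int main(void) {
    double theta = M_PI / 2.0;                        /* 90 degrees */
    double rot[3][3] = {{cos(theta), -sin(theta), 0},
                        {sin(theta),  cos(theta), 0},
                        {0,           0,          1}};
    double xp, yp;
    transform(rot, 1.0, 0.0, &xp, &yp);
    printf("(1, 0) rotated 90 deg -> (%.1f, %.1f)\n", xp, yp); /* (0.0, 1.0) */
    return 0;
}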

3b. Develop an OpenGL program to create and rotate a triangle about the origin and a fixed point.
Answer:

#include <stdio.h>

#include <math.h>

#include <GL/glut.h>

float h = 0.0, k = 0.0, theta;

int choice;

// Function to draw the original triangle

void draw_triangle() {

glBegin(GL_LINE_LOOP);

glVertex2f(100, 100);
glVertex2f(400, 100);

glVertex2f(250, 350);

glEnd();
}

// Rotation about the origin

void display_about_origin() {

glClear(GL_COLOR_BUFFER_BIT);

glColor3f(1.0, 0.0, 0.0); // Red original

draw_triangle();

glPushMatrix(); // Save current matrix

glRotatef(theta, 0.0, 0.0, 1.0); // Rotate around origin

glColor3f(0.0, 0.0, 1.0); // Blue rotated

draw_triangle();

glPopMatrix(); // Restore matrix

glFlush();
}

// Rotation about a fixed point

void display_about_fixed_point() {

glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 0.0, 0.0); // Red original

draw_triangle();

glPushMatrix(); // Save matrix state

glTranslatef(h, k, 0.0); // Move pivot to origin

glRotatef(theta, 0.0, 0.0, 1.0); // Rotate

glTranslatef(-h, -k, 0.0); // Move back

glColor3f(0.0, 0.0, 1.0); // Blue rotated

draw_triangle();

glPopMatrix(); // Restore matrix

glFlush();
}

// Initialization

void myinit() {

glClearColor(1.0, 1.0, 1.0, 1.0); // White background

glColor3f(1.0, 1.0, 0.0); // Yellow (default)

glMatrixMode(GL_PROJECTION);

glLoadIdentity();

gluOrtho2D(-500.0, 500.0, -500.0, 500.0);

glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char **argv) {


printf("Enter your Choice:\n1 -> Rotation about Origin\n2 -> Rotation about Fixed
Point\n");

scanf("%d", &choice);

printf("Enter the Rotation Angle (in degrees): ");

scanf("%f", &theta);

if (choice == 2) {

printf("Enter the Fixed Point Coordinates (h, k): ");

scanf("%f %f", &h, &k);

glutInit(&argc, argv);

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);

glutInitWindowSize(600, 600);

glutInitWindowPosition(100, 100);

if (choice == 1) {

glutCreateWindow("Rotation About Origin");

glutDisplayFunc(display_about_origin);

} else if (choice == 2) {

glutCreateWindow("Rotation About Fixed Point");

glutDisplayFunc(display_about_fixed_point);

} else {

printf("Invalid Choice! Exiting...\n");

return 1;
}

myinit();

glutMainLoop();

return 0;
}

4a. Explain homogeneous co-ordinate representation.


Answer:

Homogeneous Coordinates in Computer Graphics (Using GLUT/GL)

Homogeneous coordinates are a fundamental mathematical tool in computer graphics, enabling efficient representation of transformations (translation, rotation, scaling) using matrix multiplication. They are extensively used in OpenGL (GLUT) for 3D rendering.

1. What Are Homogeneous Coordinates?

• Definition:

o An extension of Cartesian coordinates (x, y, z) to (x, y, z, w), where w is a scaling factor (usually 1).

o Allows translation to be expressed as a matrix multiplication (unlike Cartesian coordinates).

• Key Idea:

o A 2D point (x, y) → (x, y, 1) in homogeneous coordinates.

o A 3D point (x, y, z) → (x, y, z, 1).

2. Why Use Homogeneous Coordinates?

1. Unified Transformation Handling:

o All transformations (translation, rotation, scaling) can be represented as 4×4 matrices.

2. Perspective Projection:

o Essential for 3D rendering (e.g., depth perception in OpenGL).

3. Efficient GPU Computation:

o GPUs optimize matrix operations in homogeneous space.

3. Homogeneous Transformations in OpenGL (GLUT/GL)

A. Translation

• Matrix (the standard 4×4 form; the original diagram is lost):

T(t_x, t_y, t_z) = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

• GLUT Code:

glTranslatef(tx, ty, tz); // Applies translation

B. Rotation

• Matrix (Z-axis; the standard 4×4 form):

R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

• GLUT Code:

glRotatef(angle, 0, 0, 1); // Rotates around Z-axis

C. Scaling

• Matrix (the standard 4×4 form):

S(s_x, s_y, s_z) = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

• GLUT Code:

glScalef(sx, sy, sz); // Scales the object
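
As an illustration (not from the original answer), a minimal C sketch showing that translating a point really is a 4×4 matrix-vector product in homogeneous coordinates:

#include <stdio.h>

/* Multiply a 4x4 matrix by a homogeneous point (x, y, z, w). */
void mat4_mul_vec4(const float m[4][4], const float v[4], float out[4]) {
    for (int i = 0; i < 4; i++) {
        out[i] = 0.0f;
        for (int j = 0; j < 4; j++)
            out[i] += m[i][j] * v[j];
    }
}

int main(void) {
    /* Translation by (3, 4, 0) expressed as a matrix. */
    float T[4][4] = {{1, 0, 0, 3},
                     {0, 1, 0, 4},
                     {0, 0, 1, 0},
                     {0, 0, 0, 1}};
    float p[4] = {2, 3, 0, 1};     /* point (2, 3, 0) with w = 1 */
    float q[4];
    mat4_mul_vec4(T, p, q);
    printf("(%g, %g, %g)\n", q[0], q[1], q[2]);  /* prints (5, 7, 0) */
    return 0;
}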

4b. Develop an OpenGL program to create and rotate a cube.


Answer:

#include <GL/glut.h>

float angleX = 0.0f;

float angleY = 0.0f;

float angleZ = 0.0f;

void init() {

glEnable(GL_DEPTH_TEST); // Enable depth testing for 3D

glClearColor(0.1, 0.1, 0.1, 1.0); // Background color

}
// Function to draw a cube

void drawCube() {

glBegin(GL_QUADS);

// Front face (z = 1.0)

glColor3f(1, 0, 0); // Red

glVertex3f(-1, -1, 1);

glVertex3f(1, -1, 1);

glVertex3f(1, 1, 1);

glVertex3f(-1, 1, 1);

// Back face (z = -1.0)

glColor3f(0, 1, 0); // Green

glVertex3f(-1, -1, -1);

glVertex3f(-1, 1, -1);

glVertex3f(1, 1, -1);

glVertex3f(1, -1, -1);

// Top face (y = 1.0)

glColor3f(0, 0, 1); // Blue

glVertex3f(-1, 1, -1);

glVertex3f(-1, 1, 1);

glVertex3f(1, 1, 1);

glVertex3f(1, 1, -1);
// Bottom face (y = -1.0)

glColor3f(1, 1, 0); // Yellow

glVertex3f(-1, -1, -1);

glVertex3f(1, -1, -1);

glVertex3f(1, -1, 1);

glVertex3f(-1, -1, 1);

// Right face (x = 1.0)

glColor3f(1, 0, 1); // Magenta

glVertex3f(1, -1, -1);

glVertex3f(1, 1, -1);

glVertex3f(1, 1, 1);

glVertex3f(1, -1, 1);

// Left face (x = -1.0)

glColor3f(0, 1, 1); // Cyan

glVertex3f(-1, -1, -1);

glVertex3f(-1, -1, 1);

glVertex3f(-1, 1, 1);

glVertex3f(-1, 1, -1);

glEnd();
}

void display() {

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();

glTranslatef(0.0f, 0.0f, -7.0f); // Move cube into screen

glRotatef(angleX, 1.0, 0.0, 0.0);

glRotatef(angleY, 0.0, 1.0, 0.0);

glRotatef(angleZ, 0.0, 0.0, 1.0);

drawCube();

glutSwapBuffers(); // Use double buffering
}

// Idle function to update angles

void rotateCube() {

angleX += 0.5f;

angleY += 0.4f;

angleZ += 0.3f;

if (angleX > 360) angleX -= 360;

if (angleY > 360) angleY -= 360;

if (angleZ > 360) angleZ -= 360;

glutPostRedisplay(); // Mark the current window as needing to be redisplayed
}

void reshape(int w, int h) {

if (h == 0) h = 1;
float ratio = 1.0f * w / h;

glMatrixMode(GL_PROJECTION);

glLoadIdentity();

gluPerspective(45.0, ratio, 1.0, 100.0);

glMatrixMode(GL_MODELVIEW);

glViewport(0, 0, w, h);
}

int main(int argc, char **argv) {

glutInit(&argc, argv);

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); // Enable double buffering and depth

glutInitWindowSize(600, 600);

glutCreateWindow("Rotating Cube - OpenGL");

init();

glutDisplayFunc(display);

glutIdleFunc(rotateCube); // Register idle function for animation

glutReshapeFunc(reshape);

glutMainLoop();

return 0;

}
Q.5 (a) Explain in detail various logical devices.
Answer:

1. Introduction

In computer graphics, logical devices refer to standardized, abstract representations of physical input and output devices. They serve as a bridge between graphics applications and the actual hardware, enabling device-independent interaction with the system. This abstraction allows the same application to function seamlessly across different hardware configurations.

The concept of logical devices is especially relevant in graphics standards such as GKS
(Graphics Kernel System) and PHIGS (Programmer’s Hierarchical Interactive
Graphics System), which define input/output handling using logical device categories.

2. Categories of Logical Devices

Logical devices are typically divided into two major categories:

A. Input Logical Devices

These are used to collect user input in various forms, depending on the nature of
interaction required by the graphics application.

Device | Purpose | Description | Example
Locator | Point Input | Used to specify a position on the screen (2D or 3D coordinates). | Selecting a point using a mouse or tapping on a touchscreen.
Pick | Object Selection | Used to select a displayed object by pointing. | Clicking on a rectangle to select and modify it.
Choice | Option Selection | Used when the user must choose from a predefined set of options. | Selecting an item from a dropdown menu or GUI radio buttons.
String | Text Input | Accepts a sequence of characters from the user. | Typing a label or a filename using the keyboard.
Stroke | Path Input | Records a series of points, generally representing a continuous motion. | Drawing a freehand curve using a pen or stylus.
Valuator | Scalar Input | Used to input a real (continuous) value. | Adjusting volume or brightness using a slider.

Each logical input device is associated with a specific data type, and the system interprets
inputs according to its logical type regardless of the underlying hardware.

B. Output Logical Devices

Logical output devices handle rendering visual or graphical data onto physical devices
like monitors or printers. These are abstracted similarly to input devices to ensure
flexibility and hardware independence.

Examples include:

• Display Monitors

• Printers

• Plotters

• Virtual Framebuffers

Graphics standards like GKS and PHIGS abstract these output targets so that developers
can write output code once and deploy across various platforms.

3. Advantages of Using Logical Devices

Benefit | Explanation
Device Independence | Applications interact with abstract devices, not the hardware itself. This ensures compatibility across different platforms and hardware.
Application Portability | Code developed using logical devices can be reused on systems with different input/output configurations.
Simplified Integration | Developers do not need to manage hardware-level details; the system handles the interaction based on logical categories.
4. Real-World Example

Consider a drawing application that uses a locator device for point selection. The
application need not know whether the user is interacting with a:

• Mouse

• Stylus

• Touchscreen

All are handled by the system under the abstraction of the "locator" logical device. Thus,
the application logic remains unchanged, and the same software can run on desktop
and tablet environments with no additional code.

5. Conclusion

Logical devices provide an essential abstraction layer between graphics software and
physical hardware. By standardizing input and output operations, they allow for greater
flexibility, reusability, and portability in graphics applications. Whether inputting a
point or rendering an image, logical devices enable consistent functionality across diverse
environments, ensuring that graphics applications are adaptable and maintainable.

5b. Explain traditional animation technique in detail with example.


Answer:

1. Introduction

Traditional animation is the process of creating animated sequences by manually drawing each frame. This technique was the dominant form of animation for much of the 20th century, especially in film and television, before the advent of digital and computer-generated animation.

It relies on frame-by-frame craftsmanship, where an animator draws a series of images that, when shown in rapid succession, create the illusion of movement. Despite being replaced in many areas by modern tools, traditional animation remains respected for its artistic value and foundational importance.
2. Key Techniques of Traditional Animation

Traditional animation encompasses various methods that animators use to bring motion
to drawings. Below are the primary techniques used:

1. Frame-by-Frame Animation

• Description: Each individual frame is drawn by hand, with slight variations from
one frame to the next.

• Purpose: Allows for precise control of movement and detail.

• Example: Classic Disney films such as Snow White and the Seven Dwarfs (1937),
where every motion was sketched one frame at a time.

2. Keyframe Animation

• Description: Animators draw only the keyframes (the most important poses or
positions in a sequence).

• Process: The in-between frames (also called "in-betweens" or "tweens") are added afterward by junior animators or assistants.

• Advantage: Reduces workload while maintaining fluidity of motion.

• Example: In a character jump animation, only the take-off, mid-air, and landing
frames are initially drawn; the frames in between are added later.

3. Cel Animation

• Description: Characters are drawn on transparent celluloid sheets (cels), while static backgrounds are drawn separately.

• Technique: Cels are layered over the background so that only moving parts need
redrawing.

• Benefit: Saves time and effort by reusing static elements across frames.

• Example: A character walking across a stationary background, where only the character’s movement is updated while the background remains constant.
4. Rotoscoping

• Description: A technique in which animators trace over live-action footage frame by frame to produce realistic motion.

• Use Case: Helps in achieving life-like movements, especially in complex action scenes.

• Example: In the original Star Wars trilogy, rotoscoping was used to animate
lightsaber glows over live-action sword fights.

3. Advantages of Traditional Animation

Advantage | Description
High Artistic Quality | Animators have complete control over every visual element.
Stylistic Uniqueness | Allows for distinct, hand-crafted visual styles.
Foundational for Learning | Helps animators understand the principles of motion and timing.

4. Limitations of Traditional Animation

Limitation | Description
Time-Consuming | Requires drawing dozens or hundreds of frames for a few seconds of motion.
Labor-Intensive | Involves significant human effort, coordination, and artistic skill.
Difficult to Edit or Reuse | Changes often require redrawing entire sequences.
5. Conclusion

Traditional animation laid the groundwork for the entire animation industry. While
modern techniques have increased efficiency and flexibility, traditional methods are still
admired for their craftsmanship, expressive power, and historical significance.
Understanding these methods is essential for appreciating the evolution of animation and
for developing strong foundational skills in the field.

6 (a) Explain input modes in detail with neat diagram?


Answer:

Introduction to Input Modes

Input modes in computer graphics define how data from input devices (e.g., keyboard,
mouse, joystick) is captured and processed by an application. These modes
determine when and how the system accepts user inputs, influencing interactivity and
responsiveness.

There are three fundamental input modes:

1. Request Mode

2. Sample Mode

3. Event Mode

Each mode serves distinct purposes based on application requirements.

1. Request Mode

Definition:

• The application explicitly requests input from the user and pauses
execution until the input is received.

• Synchronous and blocking.

Characteristics:

• User must respond before the program proceeds.


• Ideal for form-based applications or sequential workflows.

Example:


"Enter your name: [_________]"

// Program waits until the user types and submits.

Use Case:

• CAD software prompting for coordinates.

• Login screens requiring username/password.

2. Sample Mode

Definition:

• The application continuously samples input data without waiting for user
action.

• Asynchronous and non-blocking.

Characteristics:

• Input is read at fixed intervals (e.g., every frame in a game loop).

• Suitable for real-time systems requiring frequent updates.

Example:

while (gameRunning) {

joystickPosition = sampleJoystick();

updateCharacterMovement(joystickPosition);

}
Use Case:

• Video games tracking mouse/joystick movements.

• Sensor data monitoring (e.g., temperature readings).

3. Event Mode

Definition:

• Inputs trigger events stored in a queue, processed asynchronously by the application.

• Event-driven architecture.

Characteristics:

• Events (e.g., clicks, key presses) are buffered and handled via callbacks.

• Maximizes responsiveness.

Example:

button.onClick = () => {

drawCircle(); // Executes only when clicked.

};

Use Case:

• GUI applications (e.g., button clicks in Photoshop).

• Mobile apps responding to touch gestures.
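
GLUT itself follows event mode: callbacks are registered and glutMainLoop() dispatches queued events. A minimal C sketch (the key binding and window title are assumptions):

#include <GL/glut.h>

/* Called whenever a mouse-button event is dequeued. */
void onMouse(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        glutPostRedisplay();       /* redraw in response to a click */
}

/* Called whenever a key-press event is dequeued. */
void onKey(unsigned char key, int x, int y) {
    if (key == 'r')
        glutPostRedisplay();       /* assumed binding: 'r' forces a redraw */
}

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutCreateWindow("Event Mode Demo");
    glutDisplayFunc(display);
    glutMouseFunc(onMouse);        /* register event callbacks */
    glutKeyboardFunc(onKey);
    glutMainLoop();                /* event loop dispatches queued input */
    return 0;
}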

Comparison Table

Mode | Synchrony | Blocking? | Best For
Request | Synchronous | Yes | User prompts, forms
Sample | Asynchronous | No | Real-time tracking (games)
Event | Asynchronous | No | Interactive GUIs


6 (b) Explain character animation and periodic motions in detail.
Answer:

1. Introduction

Character animation is the process of creating lifelike movement for digital characters
through techniques like keyframing, motion capture, and rigging. Periodic motion refers
to repetitive, cyclical movements (e.g., walking or flapping wings) that can be
mathematically modeled for efficiency and realism.

2. Character Animation Techniques

(a) Keyframe Animation

Definition:

• Artists define key poses (start/end extremes), and software interpolates intermediate frames.

Process:

1. Pose Creation: Define critical frames (e.g., contact, recoil, and passing positions in a walk cycle).

2. Interpolation: Software (e.g., Blender) generates in-between frames using linear or Bézier curves.

Example:

o A jumping animation with keyframes for crouch, mid-air, and landing.

Advantages:

o Full artistic control; ideal for stylized motion.

(b) Motion Capture (MoCap)

Definition:

• Records real-world actor movements via sensors and maps them onto a 3D character.

Process:

1. Data Acquisition: Actors wear markers tracked by cameras (e.g., Vicon systems).

2. Retargeting: Movement data is adjusted to fit the character’s skeleton.

Example:

o Gollum in Lord of the Rings used Andy Serkis’s performance.

Advantages:

o High realism; reduces manual animation time.

(c) Rigging and Skinning

Definition:

• Rigging: Creating a digital skeleton (armature) with joints and controls.

• Skinning: Binding the 3D mesh to the rig so it deforms naturally.

Process:

1. Joint Placement: Bones are positioned at articulation points (e.g., elbows, knees).

2. Weight Painting: Define how mesh vertices follow bones (e.g., shoulder movement affects the arm).

Example:

o A rigged humanoid character raising its arm rotates the clavicle, shoulder, and elbow joints.

Advantages:

o Enables complex deformations (e.g., facial expressions).

3. Periodic Motion

(a) Definition and Applications

• Periodic motion repeats at fixed intervals, common in natural phenomena.

• Applications:

o Walk/run cycles in games.

o Environmental animations (e.g., swinging pendulums, fluttering flags).

(b) Examples

Motion | Description | Use Case
Walking Cycle | Legs/arms move in a loop with 4 key phases. | Game NPCs, animated films.
Bird Flight | Wings flap up/down rhythmically. | Open-world game creatures.
Pendulum | Sine-based swinging motion. | Clock mechanics, physics simulations.

(c) Mathematical Modeling

• Represented using trigonometric functions for smooth loops:

y(t) = A \cdot \sin(\omega t + \phi)

o A: Amplitude (movement range).

o ω: Angular frequency (speed).

o φ: Phase shift (start position).

• Example:

o A bouncing ball’s height over time: y(t) = 0.5 \cdot \sin(2t)
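
A minimal C sketch (illustrative, not from the original answer) samples such a periodic motion once per frame:

#include <stdio.h>
#include <math.h>

/* y(t) = A * sin(w * t + phi): one sample of a periodic motion. */
double periodic(double A, double w, double phi, double t) {
    return A * sin(w * t + phi);
}

int main(void) {
    /* Sample a pendulum-like swing for 10 frames, dt = 0.1 s. */
    for (int frame = 0; frame < 10; frame++) {
        double t = frame * 0.1;
        printf("t=%.1f  y=%.3f\n", t, periodic(0.5, 2.0, 0.0, t));
    }
    return 0;
}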

(d) Benefits

1. Realism: Mimics natural repetitive motions.

2. Efficiency: Loops save memory (e.g., reusable walk cycles).

3. Scalability: Parameters (A, ω) adjust motion dynamically.

4. Conclusion

• Character animation combines artistic and technical methods (keyframing, MoCap, rigging) to create believable motion.

• Periodic motion leverages mathematical models for efficient, reusable animations.

• Together, they enhance realism in films, games, and simulations while optimizing
workflow.

9a. Explain the concept of hidden surface removal.


Answer:

Introduction

Hidden Surface Removal (HSR) is a fundamental concept in computer graphics, referring to the process of identifying and eliminating parts of 3D objects that are not visible from a particular viewpoint. The goal is to ensure that only the visible surfaces of objects are rendered in a 2D view, thereby improving realism and rendering efficiency.

Need for HSR

In a 3D environment, multiple objects may overlap each other. Without HSR, all
surfaces—visible or not—would be rendered, resulting in a visually confusing and
computationally expensive process. HSR solves this by determining which parts of the
scene are obscured and excluding them from rendering.
Example

Consider a building behind a tree. When photographed, only the parts of the building not
blocked by the tree are visible. Similarly, in a video game, when a character moves
behind a wall, the game engine stops rendering the hidden parts of that character, as they
are occluded from the player's view.

Common HSR Techniques

1. Z-Buffer Algorithm

o Stores depth (z-value) for every pixel.

o Compares incoming pixel depth with the stored value.

o Renders the pixel only if it is closer to the viewer.

Advantages: Simple and widely used in real-time applications.
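The per-pixel depth test can be sketched in a few lines of C (illustrative only; the real test runs in GPU hardware, and the buffer sizes here are assumptions):

#include <float.h>

#define W 640
#define H 480

float zbuffer[H][W];          /* depth stored per pixel */
unsigned int frame[H][W];     /* color stored per pixel */

/* Initialize every depth entry to "infinitely far". */
void clear_zbuffer(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            zbuffer[y][x] = FLT_MAX;
}

/* Write a fragment only if it is closer than what is stored. */
void write_fragment(int x, int y, float z, unsigned int color) {
    if (z < zbuffer[y][x]) {  /* the fragment closer to the viewer wins */
        zbuffer[y][x] = z;
        frame[y][x] = color;
    }
}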

2. Painter’s Algorithm

o Renders objects from farthest to nearest.

o Closer objects overwrite farther ones.

Limitations: Cannot handle cyclic overlaps easily.

3. Back-Face Culling

o Removes surfaces facing away from the camera.

o Based on surface normal direction relative to the viewer.

Common in: 3D games and modeling software.

Workflow Diagram (Text Representation)

[3D Scene] → [Determine Viewpoint] → [Apply HSR Techniques] → [Render Visible Surfaces Only]

Applications

• Video Games: Prevent rendering of objects behind walls.

• CAD Software: Only visible parts of models are shown.

• Flight Simulators: Buildings in front occlude those behind.

• 3D Modeling Tools: Rotate models to view visible geometry only.

Conclusion

Hidden Surface Removal is essential in producing accurate, efficient, and visually correct
representations of 3D scenes on 2D displays. By eliminating unseen surfaces, HSR
optimizes rendering processes and enhances realism in computer graphics systems.

7 (a) Explain Cohen-Sutherland algorithm with example and neat diagram.
Answer:

Introduction

The Cohen–Sutherland Line Clipping Algorithm is a popular algorithm used in 2D computer graphics to clip a line segment to a rectangular viewing window. It avoids unnecessary drawing operations outside the viewport and improves performance.

Key Idea

The algorithm divides the 2D space into 9 regions using a 4-bit Region Code (Outcode)
for each endpoint:
• The center region is the viewport.

• The other 8 regions surround it (top, bottom, left, right, and corners).

Each bit in the region code indicates the position of the point relative to the clipping
window.

Region Code Format (4-bit)

Bit Position | Code | Meaning
1st (MSB) | 1000 | Top
2nd | 0100 | Bottom
3rd | 0010 | Right
4th (LSB) | 0001 | Left

Region Code 0000 → Point is inside the window
Region Code 1001 → Point is Top-Left of the window

Algorithm Steps

1. Assign Region Codes to the line endpoints.

2. If both endpoints have code 0000, accept the line (completely inside).

3. If logical AND of both codes ≠ 0 → line is completely outside (reject).

4. If neither of the above:

o Choose an endpoint that is outside the window.

o Use the region code to find intersection with the window boundary.

o Replace the outside point with this intersection.

o Repeat steps 1–4 until the line is either accepted or rejected.
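
A minimal C sketch of the outcode step (window bounds taken from the example below; the enum values mirror the 4-bit codes):

/* 4-bit outcodes: TOP = 1000, BOTTOM = 0100, RIGHT = 0010, LEFT = 0001 */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

/* Clip window bounds from the worked example (assumed globals). */
double xmin = 50, ymin = 50, xmax = 150, ymax = 150;

int outcode(double x, double y) {
    int code = INSIDE;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}

/* Trivial accept: (outcode(A) | outcode(B)) == 0
   Trivial reject: (outcode(A) & outcode(B)) != 0 */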


Neat Diagram: Region Division & Codes

Example: Clipping Line from (10, 10) to (200, 200)

Window: (50, 50) to (150, 150)

➤ Step 1: Assign Outcodes

• Point A = (10, 10)

o Left of xmin → Left bit = 1

o Below ymin → Bottom bit = 1
→ Outcode A = 0101 (Bottom-Left)

• Point B = (200, 200)

o Right of xmax → Right bit = 1

o Above ymax → Top bit = 1
→ Outcode B = 1010 (Top-Right)

➤ Step 2: Logical AND

0101 AND 1010 = 0000 → Not trivially rejected

➤ Step 3: Clip One Endpoint

Clip point A against the Bottom edge (y = 50), using the line equation:

y = m(x - x_1) + y_1, \quad m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{200 - 10}{200 - 10} = 1

y = 50 \Rightarrow 50 = 1(x - 10) + 10 \Rightarrow x = 50

So, new A = (50, 50), inside.

➤ Step 4: Clip Point B against Top (y = 150)

y = 150 \Rightarrow 150 = 1(x - 10) + 10 \Rightarrow x = 150

New B = (150, 150), inside.

Final Clipped Line: From (50, 50) to (150, 150)

Accepted

Final Diagram: Clipping Example

Advantages of Cohen–Sutherland Algorithm

• Efficient for rectangular clip windows.

• Uses simple bitwise operations (AND, OR).

• Reduces unnecessary drawing of lines outside the viewport.

Disadvantages

• Limited to rectangular clipping regions.

• Doesn’t handle curves or polygons directly.

Conclusion

The Cohen–Sutherland algorithm is a classic line clipping technique in 2D graphics. It uses binary region codes and logical operations to efficiently decide line visibility and perform clipping. This is essential in graphics pipelines for rendering scenes within view boundaries.

7 (b) Explain in detail the Phong Lighting Model.


Answer:

Introduction

The Phong Lighting Model is a widely used local illumination model in computer
graphics for simulating realistic lighting on surfaces. It approximates the interaction of
light with a surface at a single point using three components:

• Ambient reflection

• Diffuse reflection

• Specular reflection

Together, these simulate the appearance of dull to shiny surfaces under different lighting
conditions.

1. Ambient Reflection

Purpose: Models the constant light present in the environment due to multiple
reflections.
• It does not depend on light direction or surface orientation.

• Simulates soft global illumination even where light doesn’t reach directly.

Formula:

I_{ambient} = k_a \cdot I_a

Where:

• k_a: Ambient reflection coefficient (0 ≤ k_a ≤ 1)

• I_a: Intensity of ambient light

2. Diffuse Reflection

Purpose: Simulates the scattering of light on rough surfaces.

• Depends on the angle between the surface normal (\vec{N}) and light vector (\vec{L}).

• Follows Lambert’s Cosine Law: maximum intensity when the surface faces the light directly.

Formula:

I_{diffuse} = k_d \cdot I_l \cdot \max(0, \vec{N} \cdot \vec{L})

Where:

• k_d: Diffuse reflection coefficient

• I_l: Light source intensity

• \vec{N}: Normalized surface normal

• \vec{L}: Normalized vector from the surface point to the light source

• \vec{N} \cdot \vec{L}: Dot product (cos θ)

3. Specular Reflection
Purpose: Models mirror-like reflections and shiny highlights.

• Depends on viewer direction and reflection vector.

• Produces highlights more intense when viewer aligns with the reflection.

Formula:

I_{specular} = k_s \cdot I_l \cdot \max(0, \vec{R} \cdot \vec{V})^n

Where:

• k_s: Specular reflection coefficient

• \vec{R}: Reflection of the light vector about the normal

• \vec{V}: View (eye) direction vector

• n: Shininess exponent (higher → smaller, sharper highlight)

Total Illumination

Final Color at a Surface Point:

I = I_{ambient} + I_{diffuse} + I_{specular}

Each component is computed per RGB channel, allowing color lighting.
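
A minimal single-channel C sketch of the model (illustrative; the helper names are assumptions, and all direction vectors are assumed normalized):

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* R = 2(N.L)N - L: reflection of light vector L about normal N. */
static Vec3 reflect(Vec3 L, Vec3 N) {
    double d = 2.0 * dot(N, L);
    Vec3 r = { d*N.x - L.x, d*N.y - L.y, d*N.z - L.z };
    return r;
}

/* Phong intensity for one color channel at one surface point. */
double phong(Vec3 N, Vec3 L, Vec3 V,
             double ka, double kd, double ks,
             double Ia, double Il, double n) {
    double ambient  = ka * Ia;
    double diffuse  = kd * Il * fmax(0.0, dot(N, L));
    double specular = ks * Il * pow(fmax(0.0, dot(reflect(L, N), V)), n);
    return ambient + diffuse + specular;
}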

Diagram: Phong Lighting Model


Example Application

Rendering a metallic sphere:

• The ambient component provides base color in shadowed areas.

• The diffuse component lights the visible part facing the light.

• The specular component creates shiny highlights on the surface where the
viewer angle matches the reflection.

Used in: OpenGL shaders, game engines (Unity, Unreal), and CAD tools.

Advantages

• Simple and efficient

• Produces visually realistic shiny surfaces

• Easy to implement using vector operations

• Works well with Gouraud or Phong Shading

Limitations

• Assumes point light source only

• Doesn’t account for shadows or global illumination

• View-dependent highlights can look artificial on dull surfaces

Comparison Table: Phong Model Components

Component | Depends On | Visual Effect
Ambient | Constant environment | Base illumination
Diffuse | Angle between light & normal | Soft lighting
Specular | Angle between viewer & reflection | Shiny highlights

Conclusion

The Phong Lighting Model is a cornerstone of real-time 3D graphics. By combining ambient, diffuse, and specular lighting, it creates visually convincing effects suitable for a wide range of surfaces, from dull plastics to shiny metals. Despite its simplicity, it remains a foundational model in OpenGL and shader programming.

8 (a) Explain color models.

Answer:

Definition

A color model is a mathematical and visual framework for describing and representing
colors using a set of primary components. These models allow colors to be specified
numerically for use in digital displays, image processing, and printing.

Each color model defines:

• A set of base colors (e.g., Red, Green, Blue),

• A coordinate system to represent colors,

• A method to combine base colors to form other colors.

Types of Color Models

1 RGB (Red, Green, Blue)

Type: Additive Color Model
Used In: Digital screens, LED displays, computer monitors, TVs.

Working Principle:

• Colors are created by adding light of three primary colors.

• Full intensity of all three = white, absence = black.


Range:

• R, G, B values range from 0 to 255 (in 8-bit systems).

• Example:

o (255, 0, 0) = Red

o (0, 255, 0) = Green

o (0, 0, 255) = Blue

o (255, 255, 255) = White

Applications:

• Web graphics

• Computer monitors

• Digital cameras

2 CMY(K) (Cyan, Magenta, Yellow, Black)

Type: Subtractive Color Model
Used In: Inkjet printing, magazines, physical prints.

Working Principle:

• Starts with white light. Colors are formed by subtracting light using ink
pigments.

• CMY are the complements of RGB (with channels normalized to [0, 1]):

o C = 1 - R

o M = 1 - G

o Y = 1 - B

Why K (Black) is added:

• Black ink improves contrast, sharpness, and detail in dark areas.

Applications:
• Printers

• Publishing industry

• Packaging and graphic design
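
A minimal C sketch of the complement relation above (channels assumed normalized to [0, 1]; not from the original answer):

#include <stdio.h>

/* Convert normalized RGB (0..1) to CMY using the complement relation. */
void rgb_to_cmy(double r, double g, double b,
                double *c, double *m, double *y) {
    *c = 1.0 - r;
    *m = 1.0 - g;
    *y = 1.0 - b;
}

int main(void) {
    double c, m, y;
    rgb_to_cmy(1.0, 0.0, 0.0, &c, &m, &y);      /* pure red */
    printf("C=%.1f M=%.1f Y=%.1f\n", c, m, y);  /* C=0.0 M=1.0 Y=1.0 */
    return 0;
}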

3 HSV (Hue, Saturation, Value)

Type: Perceptual Color Model
Used In: Color pickers, image editing, lighting design.

Working Principle:

• Mimics how humans perceive colors.

• Components:

o Hue (H): Actual color (0° to 360°) → red at 0°, green at 120°, blue at
240°

o Saturation (S): Color intensity (0 to 1) → 0 = gray, 1 = full color

o Value (V): Brightness (0 to 1) → 0 = black, 1 = full brightness

Example:

• (H=0°, S=1, V=1) = Bright red

• (H=240°, S=0.5, V=0.8) = Light blue

Applications:

• Photo editing (Photoshop, GIMP)

• Color wheel selectors

• LED lighting control

4 YCbCr (Luminance-Chrominance)

Type: Luminance-based Model
Used In: Video compression, broadcasting (JPEG, MPEG, DVDs).

Components:
• Y: Luminance (brightness)

• Cb: Blue-difference chroma component

• Cr: Red-difference chroma component

Why it’s used:

• Human eye is more sensitive to brightness than color.

• Allows compression of color data without significant quality loss.

Applications:

• JPEG image compression

• Video codecs (H.264, MPEG-2)

• Digital TV broadcasting

Diagram: Color Model Comparison

Additive (RGB) | Subtractive (CMYK) | Perceptual (HSV)
R, G, B | C, M, Y (K) | Hue, Sat, Value

Comparison Table of Color Models

Color Model | Type | Components | Used In
RGB | Additive | Red, Green, Blue | Screens, TVs, Cameras
CMY(K) | Subtractive | Cyan, Magenta, Yellow, Black | Printers, Publishing
HSV | Perceptual | Hue, Saturation, Value | Color pickers, Lighting
YCbCr | Luminance-based | Y, Cb, Cr | Video compression, JPEG

Conclusion

Color models are foundational in computer graphics and digital imaging. Each model
serves different needs—RGB for displays, CMYK for printing, HSV for intuitive
editing, and YCbCr for compression. Understanding these models helps in optimizing
image representation, transmission, and manipulation across platforms.

8 (b) Write a short note on:

(i) Normalization and Viewport Transformation (6 Marks)

(ii) 2D Point Clipping (4 Marks)

Answer:

(i) Normalization and Viewport Transformation (6 Marks)

1. Normalization (World → NDC)

• Purpose: Converts coordinates from the world/window space to Normalized Device Coordinates (NDC) in the range [0, 1].

• Steps:

1. Translate to origin: Shift coordinates by (Xw_min, Yw_min).

2. Scale to NDC: Normalize using window dimensions.

• Equations:

X_n = \frac{X_w - X_{w,min}}{X_{w,max} - X_{w,min}}, \quad Y_n = \frac{Y_w - Y_{w,min}}{Y_{w,max} - Y_{w,min}}

o (X_w, Y_w): World coordinates.

o (X_n, Y_n): NDC (range [0, 1]).

2. Viewport Transformation (NDC → Screen)


• Purpose: Maps NDC to viewport coordinates (pixel space).

• Steps:

1. Scale NDC to viewport size.

2. Translate to viewport origin (Xv_min, Yv_min).

• Equations:

X_v = X_{v,min} + X_n \times (X_{v,max} - X_{v,min}), \quad Y_v = Y_{v,min} + Y_n \times (Y_{v,max} - Y_{v,min})

o (X_v, Y_v): Viewport coordinates.

Example

• Window: (0, 0) to (100, 100).

• Viewport: (200, 200) to (400, 400).

• Point in World: (50, 50)

o NDC: (0.5, 0.5)

o Viewport: (300, 300)
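
A minimal C sketch of the two-step mapping, reproducing this example (the helper name is an assumption):

#include <stdio.h>

/* Map a world point to viewport pixels via NDC. */
void window_to_viewport(double xw, double yw,
                        double xwmin, double ywmin, double xwmax, double ywmax,
                        double xvmin, double yvmin, double xvmax, double yvmax,
                        double *xv, double *yv) {
    double xn = (xw - xwmin) / (xwmax - xwmin);   /* normalize to [0, 1] */
    double yn = (yw - ywmin) / (ywmax - ywmin);
    *xv = xvmin + xn * (xvmax - xvmin);           /* scale to viewport */
    *yv = yvmin + yn * (yvmax - yvmin);
}

int main(void) {
    double xv, yv;
    window_to_viewport(50, 50, 0, 0, 100, 100, 200, 200, 400, 400, &xv, &yv);
    printf("(%.0f, %.0f)\n", xv, yv);   /* prints (300, 300) */
    return 0;
}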


(ii) 2D Point Clipping (4 Marks)

Definition

Determines if a point (x, y) lies within a clipping window defined by boundaries:

• x_{min} ≤ x ≤ x_{max}

• y_{min} ≤ y ≤ y_{max}
Algorithm

1. Check Conditions:

if (x ≥ x_min AND x ≤ x_max AND y ≥ y_min AND y ≤ y_max):

Point is ACCEPTED.

else:

Point is REJECTED.

Example

• Clipping Window: (0, 0) to (100, 100).

o Point (50, 50) → Accepted (inside).

o Point (150, 20) → Rejected (outside).


Applications

• Graphics Rendering: Discards points outside the view frustum.

• GUI Systems: Filters mouse clicks to active windows.

Key Differences

Aspect Normalization & Viewport Point Clipping

Purpose Coordinate mapping across spaces. Visibility testing.


Output Transformed coordinates. Binary (accept/reject).

Example Use Game object rendering. UI interaction handling.

Summary

• Normalization + Viewport: Essential for multi-resolution rendering (e.g., adapting to screen sizes).

• Point Clipping: Optimizes performance by culling invisible points.

Exam Tip: For (i), emphasize the mathematical steps; for (ii), focus on the clipping
condition.

9b. Explain perspective projection with neat diagram.


Answer:

Introduction

Perspective projection is a method used in computer graphics to represent three-dimensional objects on a two-dimensional plane in a way that mimics human vision. It gives the illusion of depth, making objects appear smaller as they move farther from the viewer.

Definition

Perspective projection is a technique where projection lines converge at a point called the
center of projection. Unlike parallel projection, it results in a more realistic view but does
not maintain the object's relative proportions.

Key Concepts

• Center of Projection: The point where all projection lines converge.

• View Plane / Projection Plane: The plane on which the 2D image is formed.
• Object Point (P): The actual point in 3D space, at coordinates (x, y, z).

• Projected Point (P′): The resulting point on the projection plane at coordinates
(x′, y′, z′).

Mathematical Perspective Projection

Assuming the view plane is located at z = d and the center of projection is at the origin (0, 0, 0), the projected coordinates (x', y') of a 3D point (x, y, z) are given by:

x' = \frac{x \cdot d}{z}, \quad y' = \frac{y \cdot d}{z}, \quad z' = d

This equation shows that as the z-value increases (object gets farther), the projected size
decreases.
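
A minimal C sketch of these equations (assuming z > 0 and a view plane at z = d) makes the size reduction with distance visible:

#include <stdio.h>

/* Project (x, y, z) onto the plane z = d, with the eye at the origin. */
void project(double x, double y, double z, double d,
             double *xp, double *yp) {
    *xp = x * d / z;
    *yp = y * d / z;
}

int main(void) {
    double xp, yp;
    project(4, 2, 10, 5, &xp, &yp);            /* a point at z = 10 */
    printf("z=10 -> (%.1f, %.1f)\n", xp, yp);  /* (2.0, 1.0) */
    project(4, 2, 20, 5, &xp, &yp);            /* same point, twice as far */
    printf("z=20 -> (%.1f, %.1f)\n", xp, yp);  /* (1.0, 0.5): smaller */
    return 0;
}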

Types of Perspective Projection

1. One-Point Perspective

o One axis intersects the projection plane.

o Used when viewing an object directly from the front.

2. Two-Point Perspective

o Two axes intersect the projection plane.

o Used when viewing objects at an angle, such as building corners.

3. Three-Point Perspective

o All three axes intersect the projection plane.

o Used in dynamic or tilted views, such as looking up at a tall structure.


Applications

• Art and Technical Drawing: Creating realistic 3D renderings.

• Video Games: Enhancing visual depth perception.

• Architecture: Designing building perspectives.

• Simulation Systems: Rendering 3D environments realistically.

Conclusion

Perspective projection is crucial for achieving realism in 3D graphics. It represents how objects diminish in size with distance, imitating human vision. Despite not preserving object proportions, it plays a vital role in various domains requiring depth simulation.
10a. OpenGL program to draw a polygon and allow the user to move the camera suitably to experiment with perspective viewing.
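Answer:

The report leaves this question without code, so the following is one possible sketch, not the original answer. Assumptions: GLUT/GLU are available, the polygon vertices are arbitrary, and the keys x/X, y/Y, z/Z move the eye along each axis in steps of 0.5; gluLookAt() rebuilds the view each frame so the perspective effect can be explored.

#include <GL/glut.h>

float ex = 0.0f, ey = 0.0f, ez = 8.0f;   /* camera (eye) position */

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(ex, ey, ez, 0, 0, 0, 0, 1, 0);   /* look at the origin */

    glColor3f(0.2f, 0.6f, 1.0f);
    glBegin(GL_POLYGON);                 /* a pentagon in the z = 0 plane */
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.5f,  0.5f, 0.0f);
    glVertex3f( 0.0f,  1.5f, 0.0f);
    glVertex3f(-1.5f,  0.5f, 0.0f);
    glEnd();

    glutSwapBuffers();
}

/* Assumed key bindings: x/X, y/Y, z/Z move the camera along each axis. */
void keyboard(unsigned char key, int x, int y) {
    switch (key) {
        case 'x': ex -= 0.5f; break;  case 'X': ex += 0.5f; break;
        case 'y': ey -= 0.5f; break;  case 'Y': ey += 0.5f; break;
        case 'z': ez -= 0.5f; break;  case 'Z': ez += 0.5f; break;
    }
    glutPostRedisplay();
}

void reshape(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)w / (h ? h : 1), 1.0, 100.0);
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 600);
    glutCreateWindow("Perspective Viewing Demo");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}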

10b. Explain orthographic and axonometric projections. Write the differences.
Answer:

1. Introduction

In computer graphics and engineering drawing, projection techniques are used to represent 3D objects on 2D surfaces such as paper or screens. Among the most widely used projection techniques are orthographic and axonometric projections. Both are types of parallel projection, but they serve different purposes and offer different perspectives.

2. Orthographic Projection

Definition: Orthographic projection is a form of parallel projection where the projectors are perpendicular to the projection plane. It allows for accurate 2D representations of each face of a 3D object without any perspective distortion.

Features

• Projection lines are perpendicular to the view plane.

• Object dimensions are preserved; there is no scaling or foreshortening.

• Each view shows only one face of the object at a time.

• It is primarily used in engineering, architecture, and manufacturing drawings.

Types of Orthographic Views

• Front View: Shows the height and width.

• Top View (Plan): Shows the width and depth.

• Side View: Shows the height and depth.

Example

A machine component shown in blueprint format with top, front, and side views drawn
separately, each aligned on a flat 2D plane.
3. Axonometric Projection

Definition

Axonometric projection is a subtype of orthographic projection in which the object is rotated along one or more of its axes relative to the projection plane. This allows multiple faces of the object to be seen simultaneously, giving a pseudo-3D effect while still using parallel lines.

Features

• Projection lines are still parallel, but not perpendicular to the plane.

• Shows multiple faces in a single view.

• Used for visualizing 3D structures in 2D.

• Involves some distortion to simulate depth, though not as strong as in perspective projection.

Types of Axonometric Projection

1. Isometric Projection:

o All three axes are at 120° angles to each other.

o Equal foreshortening along each axis.

o Most commonly used axonometric type.

2. Dimetric Projection:

o Two axes share the same angle and scale; the third differs.

o Unequal scaling.

3. Trimetric Projection:

o All three axes have different angles and scaling.

o Most complex to construct.

Example

An isometric view of a cube where the top, front, and side faces are all visible at once,
used in technical manuals and product design.
4. Key Differences Between Orthographic and Axonometric Projection

Feature | Orthographic Projection | Axonometric Projection
Projection Type | Perpendicular to projection plane | Inclined to projection plane but parallel
Faces Shown | Only one face at a time | Multiple faces simultaneously
Use Case | Precise technical schematics | 3D visualization in 2D for better comprehension
Distortion | None; true scale maintained | Some distortion due to foreshortening
Realism | Low; not visually 3D | Moderate; provides a 3D-like appearance

5. Conclusion

Orthographic and axonometric projections are fundamental techniques for visualizing 3D objects in 2D space. While orthographic projection excels in precision and dimension preservation for manufacturing and engineering purposes, axonometric projection is better suited for giving a pseudo-3D representation that enhances understanding of the object's structure. The choice of projection method depends on the intended use: accuracy vs. visual comprehension.
