Computer Graphics and Image Processing (Note)
In essence, computer graphics provides a way to convert data and information into a
pictorial form, making it easier to analyze, understand, and communicate. It transforms
abstract data into visual representations that can include images, diagrams, charts,
animations, and interactive visuals.
The essence of computer graphics lies in its ability to express data visually. Whether it is
scientific data, architectural designs, mathematical models, or user interfaces, computer
graphics allows these concepts to be viewed and interpreted through graphic objects.
These objects may appear as photographs, vector illustrations, pie charts, bar graphs,
wireframes, or 3D animations.
III. Components and Tools of Computer Graphics
The development and application of computer graphics involve several critical
components, both in hardware and software. On the hardware side, display devices such
as CRT monitors and LCDs, processing units such as GPUs, and input devices like
graphics tablets and light pens play
essential roles. Software tools include graphic libraries, rendering engines, and
programming languages that support graphical development, such as OpenGL, WebGL,
DirectX, and various scripting languages.
For instance, a company’s performance data may be presented more effectively through a
line chart or bar graph than through raw tables. Similarly, a software interface becomes
more intuitive and user-friendly when graphic buttons and icons are used instead of
textual commands.
Computer graphics, therefore, serves as a vital communication bridge between raw data
and human understanding, particularly in fields where complex data needs to be quickly
and clearly interpreted.
In education, it helps create simulations and visual learning tools. In engineering, it supports
computer-aided design (CAD) and 3D modeling. In science and medicine, it provides
visualization of complex phenomena, such as molecular structures or brain imaging.
i. Movie Industry: Computer graphics have revolutionized the film industry by allowing
the creation of realistic animations, special effects, and 3D environments. Many modern
movies use computer-generated imagery (CGI) to enhance scenes and create visual
spectacles that would be impossible to achieve with traditional filming techniques.
ii. Games: The gaming industry is heavily dependent on computer graphics. From 2D
arcade games to 3D virtual reality environments, computer graphics help in designing
interactive characters, backgrounds, and immersive experiences that keep players
engaged.
iii. Medical Imaging and Scientific Visualization: Computer graphics are used in the
medical field for visualizing complex organs and biological systems using MRI, CT
scans, and 3D models. In scientific research, they help represent data patterns,
simulations, and models of phenomena like weather systems or molecular structures.
iv. Computer Aided Design (CAD): Engineers, architects, and designers use computer
graphics tools for designing products, buildings, and machinery. CAD systems help in
creating precise technical drawings, modeling, and simulations before physical
prototypes are built.
v. Simulators for Training: Simulators equipped with advanced graphics are used to train
pilots, ship captains, and astronauts. These virtual training environments mimic real-
world scenarios and help professionals gain practical experience in a risk-free
environment.
vi. Computer Art: Artists use digital tools to create paintings, 3D sculptures, animations,
and multimedia art. Computer graphics open a world of creativity that blends traditional
art techniques with new-age digital capabilities.
vii. Presentation Graphics: Presentation tools use computer graphics to create visual aids
such as charts, graphs, and slideshows that help convey information clearly and
attractively during business meetings or lectures.
viii. Image Processing: This involves enhancing, modifying, or analyzing images using
computer algorithms. It is commonly used in facial recognition, medical diagnostics,
satellite imaging, and forensic analysis.
II. Pixel and Display Representation
A display area on a digital screen is divided into tiny units called pixels (picture
elements). Each pixel is the smallest addressable unit on the screen and contains
information about its color and intensity. When these pixels are arranged in a grid-like
pattern, they collectively form an image. The resolution of the display refers to the
number of pixels it contains, which directly affects image clarity and detail.
ii. Non-Interactive Computer Graphics: Also known as passive graphics, this type of
graphics involves one-way communication where the user has no control over the image.
The display works according to pre-defined instructions written in a static program. The
output remains the same unless the program is altered. A classic example is the
television, where images and videos are shown without any user interaction.
Video Monitors in Computer Graphics
i. The primary output device used in a computer graphics system is known as a video
monitor. This device plays a crucial role in visualizing images, shapes, and graphical data
processed by the computer. It converts digital information into visual display, thereby
allowing users to interact with or observe the graphical content.
ii. Most video monitors operate based on the Cathode Ray Tube (CRT) technology,
which has been a foundational mechanism in early display systems before the widespread
use of modern flat-panel displays like LCDs and OLEDs.
iii. At the heart of a CRT lies a beam of electrons, which is produced by a component
called the electron gun. This beam is projected forward through a series of focusing and
deflection systems, which precisely guide the stream of electrons to hit specific positions
on the monitor’s screen.
iv. The screen of a CRT is coated with a material known as phosphor. When the directed
electron beam strikes this phosphor coating, it causes a glow of light at the contact point.
Each such contact results in the generation of a tiny, illuminated spot on the screen,
which collectively forms the visual image we perceive.
v. However, a critical challenge with phosphor materials is that the emitted light fades
very rapidly. This fading would cause the image to disappear quickly if not managed
properly. Therefore, a mechanism is required to maintain the visibility of the screen
image over time.
vi. To overcome this challenge, the display system is designed to refresh the image
continuously. This is achieved by redrawing or scanning the electron beam repeatedly
over the same points on the screen in quick succession. As a result, the phosphor is
continuously re-excited, thereby maintaining a steady image on the screen.
vii. Such a system is referred to as a Refresh CRT (Cathode Ray Tube). It ensures a
smooth and flicker-free display by regularly updating the image several times per second,
typically 60 times per second or more, depending on the refresh rate.
viii. The electron gun in a CRT consists of several important components, including a
heated metal cathode, which emits electrons through a process known as thermionic
emission, and a control grid, which regulates the number and speed of the emitted
electrons. These components together ensure that the electron beam is properly formed
and modulated for accurate screen rendering.
Cathode Ray Tube (CRT): Beam Generation, Intensity Control, Focusing, and Deflection
Systems
i. In a Cathode Ray Tube (CRT), heat is supplied to the cathode by directing an electric
current through a specially designed coil of wire known as the filament. The heating
effect causes the cathode to become hot enough for thermionic emission to occur. At this
point, electrons are essentially "boiled off" the hot cathode surface, creating a flow of free
electrons that can be accelerated and directed toward the screen.
ii. The intensity of the electron beam, which ultimately influences the brightness of the
display, is regulated by adjusting the voltage level on a component called the control grid.
The control grid is a metallic cylindrical structure placed over the cathode. It includes a
small aperture (hole) at one end through which electrons must pass. By controlling the
voltage applied to this grid, we determine how many electrons are allowed to pass
through.
iii. The brightness of the display on the phosphor-coated screen is directly proportional to
the number of electrons that strike the screen. Thus, the amount of light emitted by the
phosphor depends on the intensity of the electron beam. By carefully adjusting the
control grid’s voltage, we can increase or decrease the number of electrons passing
through the aperture, thereby controlling the brightness of the resulting image.
iv. If a low negative voltage is applied to the control grid, it slightly repels some of the
electrons, thereby reducing the electron flow. However, some electrons still pass through,
resulting in a moderately bright display. Conversely, if a high negative voltage is applied
to the control grid, it completely blocks the electron flow, effectively preventing any
electrons from reaching the screen and producing a dark display area.
v. As the electrons move from the electron gun toward the screen, they naturally repel
each other due to their like negative charges. If left unchecked, this mutual repulsion
would cause the electron beam to spread out or diverge, resulting in a blurry or unfocused
image. To address this, a focusing system is used to ensure that the beam converges into a
narrow, precise spot as it hits the phosphor surface.
vi. Focusing of the electron beam can be achieved through the use of either electric fields
or magnetic fields. These fields are arranged in such a way that they counteract the
natural divergence of the electron beam and guide the electrons to a single point on the
screen.
vii. A common method of beam focusing and direction control in CRTs involves the use
of magnetic deflection coils. These are pairs of electromagnetic coils mounted around the
neck of the CRT envelope. Each pair generates a magnetic field that interacts with the
moving electron beam to steer it in the desired direction. One pair of coils controls
horizontal deflection, allowing the beam to move left or right, while the other pair
handles vertical deflection, guiding the beam up and down.
viii. Alternatively, some CRT systems use electrostatic deflection instead of magnetic
deflection. In this arrangement, two pairs of parallel plates are mounted inside the CRT
envelope. One pair of plates is aligned horizontally and is responsible for controlling
vertical movement of the beam. The second pair is mounted vertically and regulates
horizontal beam deflection. When voltage is applied to these plates, the resulting electric
field deflects the electron beam accordingly, allowing precise control over where the
beam strikes the screen.
Persistence and Resolution in CRT Displays
i. Persistence in the context of Cathode Ray Tube (CRT) technology refers to the duration
of time it takes for the light emitted by the phosphor coating on the screen to decay to
one-tenth of its original intensity after being excited by an electron beam. When the beam
strikes the phosphor layer inside the screen, it causes the phosphor to glow and emit
visible light. This glow does not vanish immediately; instead, it gradually fades away,
and the time it takes for this fading process is what defines persistence.
ii. The level of persistence of the phosphor material directly influences the visual
performance of the display. Lower persistence phosphors, which decay more quickly, are
often used in high-speed graphical displays or animation systems because they require
faster refresh rates to maintain a continuous, flicker-free image. These are particularly
suitable for systems where images change rapidly, such as in gaming, simulations, or
video editing applications. On the other hand, higher persistence phosphors are better
suited for static image displays, such as text-based applications, where image flicker is
less of an issue.
iv. Resolution in CRT technology refers to the maximum number of distinct points or
pixels that can be displayed on the screen without overlapping. It represents the level of
detail or sharpness of the image that the system is capable of producing. Resolution is
typically expressed in terms of the number of horizontal and vertical pixel elements (e.g.,
1920 × 1080), and it plays a crucial role in determining the clarity, quality, and realism of
images and text displayed.
v. The higher the resolution, the greater the number of pixels available to represent visual
information, which results in sharper and more detailed images. CRT systems capable of
displaying very high resolutions are often classified as high-definition (HD) systems.
These systems are ideal for applications requiring high image fidelity, such as
professional graphics design, medical imaging, and multimedia production.
vi. It is important to note that resolution is influenced by various factors, including the
quality of the electron beam focusing system, the size of the screen, the dot pitch
(distance between phosphor dots), and the accuracy of the deflection system. All these
elements must work together efficiently to achieve and maintain a high-resolution
display.
Aspect Ratio
i. The Aspect Ratio is a fundamental concept in computer graphics and display systems.
It refers to the proportional relationship between the number of horizontal points (pixels)
and vertical points (pixels) on a display screen. When the aspect ratio is correct, lines of
equal length appear equal in both directions, so a perfect square or circle appears
accurately shaped without distortion.
ii. Technically, the aspect ratio is defined as the ratio of the number of horizontal points
to the number of vertical points on the screen. For example, if a display has 800
horizontal pixels and 600 vertical pixels, the aspect ratio is expressed as 800:600, which
can be simplified to 4:3. This means that for every 4 units of width, there are 3 units of
height.
iii. Maintaining a proper aspect ratio is important because it ensures that geometric
shapes and images are displayed correctly without appearing stretched or compressed. If
the aspect ratio is not preserved, a square could appear as a rectangle, or a circle might
look like an oval. This can negatively affect user experience, particularly in applications
involving graphic design, gaming, video editing, or scientific visualization.
iv. Common aspect ratios used in display systems include 3:2, 4:3, 16:9, and 21:9. The
4:3 aspect ratio was typical of traditional CRT monitors and early computer screens,
whereas 16:9 has become the standard for most modern widescreen monitors, laptops,
and HDTVs, offering a wider viewing area suited for multimedia applications.
v. In computer graphics programming and screen design, understanding and applying the
correct aspect ratio is crucial to ensure that graphical elements maintain their intended
proportions and visual accuracy across different screen sizes and resolutions.
Raster Scan Display
i. A Raster Scan Display is the most common type of Cathode Ray Tube (CRT) monitor
used in computer graphics systems. It operates by systematically scanning the screen in a
specific pattern known as the raster pattern, which is similar to the way television screens
function.
ii. In a raster scan system, the electron beam is swept horizontally across the screen one
row at a time, from the top to the bottom of the screen. After reaching the end of one row
(called a scan line), the beam moves back to the beginning of the next line and continues
the process until the entire screen has been scanned from top to bottom. This complete
cycle is known as one frame.
iii. As the electron beam moves across each line, its intensity is rapidly switched on and
off in accordance with the image data stored in memory. This switching action generates
a pattern of illuminated and non-illuminated spots (pixels), which collectively form the
visual representation of images, text, and graphics on the screen.
iv. The screen in a raster scan system is treated as a matrix of discrete points, where each
point is known as a pixel (short for "picture element") or pel. These pixels are the
smallest addressable units on the screen, and they form the building blocks for all
displayed images.
v. To control what is shown on the screen, the system uses a dedicated area of memory
known as the frame buffer or refresh buffer. This buffer stores the intensity values or
color information for each pixel on the screen. Every pixel's brightness or color is defined
by the data stored in this memory location.
vi. During each refresh cycle, the raster scan system reads the contents of the frame
buffer and sends the corresponding signals to control the electron beam's behavior. As a
result, the display is constantly refreshed at high speed (typically 60 to 80 times per
second), which allows for smooth rendering of images and animations without flickering.
vii. The raster scan method is highly suitable for rendering realistic images, photographs,
and rich graphical content, which is why it is widely used in computer monitors,
television screens, and other display devices. It contrasts with the random scan system,
which is better suited for line drawings and vector graphics.
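The frame buffer described in items v and vi above can be pictured as a two-dimensional array holding one intensity value per pixel. A minimal Python sketch (the resolution and names are illustrative assumptions, not part of any real system):

```python
WIDTH, HEIGHT = 640, 480   # assumed screen resolution for illustration

# One intensity value per pixel; everything starts black (0).
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x: int, y: int, intensity: int) -> None:
    """Write a pixel's intensity into the refresh buffer; the video
    controller reads these values back on every refresh cycle."""
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        frame_buffer[y][x] = intensity

set_pixel(100, 50, 255)       # one bright spot
print(frame_buffer[50][100])  # 255
```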
i. Horizontal Retrace refers to the brief period during which the electron beam, after
completing a scan of a single horizontal line (scan line), moves back to the beginning of
the next line on the left side of the screen. This movement occurs rapidly and is invisible
to the viewer. During this horizontal retrace period, the beam is turned off or blanked,
meaning it does not produce any visible light while repositioning itself. This ensures a
clean and flicker-free transition between each scan line.
ii. Vertical Retrace, on the other hand, occurs at the end of a complete frame—after all
horizontal lines of the screen have been scanned from top to bottom. At this point, the
electron beam returns from the bottom right of the screen back to the top left corner to
begin scanning the next frame. Similar to the horizontal retrace, the beam is blanked
during this vertical movement to prevent drawing unintended lines on the screen. The
vertical retrace takes slightly longer than the horizontal retrace due to the greater distance
covered. These retrace intervals are critical to the operation of a raster scan display, as
they synchronize the timing of image generation and ensure that the screen contents are
rendered smoothly and without visual artifacts.
Raster-Scan Systems
Raster-scan systems are the foundation of most interactive computer graphics displays,
especially those using cathode ray tube (CRT) monitors or similar raster-based display
technologies. These systems are designed to systematically draw images on the screen by
scanning lines from top to bottom, pixel by pixel. The following elaborates on the
internal operation and structure of raster-scan systems:
ii. Frame Buffer and Direct Memory Access
A predefined portion of the computer’s system memory is allocated as the frame buffer.
The frame buffer stores the intensity or color information for every pixel on the display
screen. To achieve high-speed performance, the video controller is granted direct access
to this frame buffer. This direct memory access (DMA) allows the controller to
continuously read data from memory without constant CPU intervention, thereby
improving display performance.
This coordinate convention must be taken into account when designing graphics
software, to ensure consistency between how images are stored in memory and how they
appear on the display screen.
The raster-scan video controller works by systematically accessing pixel data from the
frame buffer and converting it into signals for the display screen. This process involves
the use of control registers, scanning logic, and memory management to ensure accurate
and efficient screen refreshing. Below is a breakdown of the core steps and mechanisms
involved:
The x-register is initialized to 0, pointing to the first pixel on the leftmost edge of the scan
line.
The y-register is initialized to ymax, pointing to the topmost scan line on the display
screen.
The x-register is then incremented, moving the beam to the right along the same scan
line. This process continues pixel-by-pixel until the entire scan line is processed.
The y-register is decremented by 1, moving down to the next scan line on the screen.
This ensures that pixel processing continues on the next line, from left to right.
The x-register and y-register are reset to their original values (x = 0, y = ymax).
The entire refresh cycle starts again, ensuring a continuous and flicker-free display.
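The register behaviour in the steps above can be sketched as a nested loop. This is a toy model, not real controller firmware; the tiny resolution is chosen only to make the output easy to check:

```python
WIDTH, HEIGHT = 8, 4          # a toy screen for illustration
ymax = HEIGHT - 1
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def refresh_one_frame(emit):
    """One refresh cycle: top scan line to bottom, each line left to right."""
    y = ymax                          # y-register: start at the topmost scan line
    while y >= 0:
        x = 0                         # x-register: start at the leftmost pixel
        while x < WIDTH:
            emit(x, y, frame_buffer[y][x])  # drive the beam with this pixel's value
            x += 1                    # move right along the scan line
        y -= 1                        # move down to the next scan line
    # the registers are then reset (x = 0, y = ymax) and the cycle repeats

visited = []
refresh_one_frame(lambda x, y, v: visited.append((x, y)))
print(len(visited))   # 32: every pixel is touched once per frame
```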
Scan conversion is the process of translating a picture definition into the pattern of pixel
intensities displayed on the screen. This process is essential for translating conceptual
graphics into visual output.
Random scan displays, also known as calligraphic displays, vector displays, or stroke
displays, operate differently from raster scan systems. Instead of sweeping the screen in a
systematic, line-by-line manner, random scan systems direct the electron beam only to
those parts of the screen where image components need to be drawn.
i. In a random scan display system, the electron beam is not swept across the entire screen
uniformly as in raster scan systems. Instead, the beam is directed only to the specific
coordinates where picture elements, such as lines or curves, are to be drawn. This results
in a drawing process that resembles that of a pen-plotter or other mechanical drawing
device.
ii. The picture is constructed on the screen by drawing one line segment at a time. The
beam traces each line directly from the start point to the end point, thus making it well-
suited for line-art graphics, such as engineering drawings, architectural designs, and
wireframe models in computer-aided design (CAD) systems.
iii. The storage of picture data in a random scan system is handled differently. Instead of
using a pixel-by-pixel intensity matrix as in raster systems, the picture definition is stored
as a set of drawing instructions (or line-drawing commands) in a reserved memory area
known as the refresh display file or refresh buffer.
iv. To display the picture, the random scan system executes the drawing instructions
stored in the refresh buffer sequentially, rendering each line according to the
specifications provided. Once all lines in the picture have been drawn, the system returns
to the beginning of the instruction list and starts the process again. This refresh process is
continuous and repeated rapidly to prevent flickering and maintain a steady image on the
screen.
v. The refresh rate of random scan systems generally ranges between 30 to 60 frames per
second, meaning that the full set of line drawing commands is processed and redrawn 30
to 60 times every second. This is sufficient to give the human eye the illusion of a
continuously visible image.
vi. Random scan displays are highly efficient for vector-based drawings but are not well-
suited for realistic image rendering or shaded color images, since they do not support
pixel-by-pixel control. As such, they are rarely used in modern systems, having been
largely replaced by raster scan systems, especially in applications requiring full-color,
photo-realistic imagery.
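The refresh display file described above is essentially a list of drawing commands rather than a grid of pixel intensities. A Python sketch (the command format and names are my own illustration):

```python
# A refresh display file: line-drawing commands, not pixel intensities.
display_file = [
    ("LINE", (10, 10), (90, 10)),
    ("LINE", (90, 10), (50, 80)),
    ("LINE", (50, 80), (10, 10)),
]

def refresh(draw_line):
    """One refresh cycle: execute every stored command in order; the
    system then returns to the top of the list and repeats."""
    for cmd, start, end in display_file:
        if cmd == "LINE":
            draw_line(start, end)

drawn = []
refresh(lambda s, e: drawn.append((s, e)))
print(len(drawn))  # 3: the triangle is retraced on every refresh cycle
```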
Random-Scan Systems
Random-scan systems are used in vector graphics displays where images are composed
primarily of lines and curves rather than pixels. Unlike raster-scan systems that update
the entire screen by scanning all the pixels line-by-line, random-scan systems draw
images by directly controlling the movement of the electron beam to follow paths of
drawing primitives like lines.
i. In a random-scan system, an application program containing graphical instructions is
first input into the system and stored in the main memory. Along with the application, a
graphics package is also stored to facilitate the conversion of graphical commands into a
format suitable for display.
ii. The graphics package serves as an interpreter that translates the graphics commands
contained in the application program into a display file. This display file holds a list of
drawing instructions, typically consisting of coordinate data and line definitions. It is
stored in a dedicated area of system memory.
iii. Once the display file has been prepared, it is accessed continuously by a specialized
hardware component known as the display processor. This display processor reads the
display file and issues the necessary commands to render the image on the screen. It
performs this task during each refresh cycle, ensuring that the image remains visible and
flicker-free.
iv. The display processor is sometimes referred to as the display processing unit or
graphics controller, as it is responsible for the interpretation and execution of graphical
commands independent of the main CPU. This allows the CPU to be free for other
processing tasks while the display processor manages screen rendering.
v. Each line is defined by the (x, y) coordinates of its endpoints. These coordinate values
are converted into analog deflection voltages, which guide the movement of the electron
beam across the screen to render the line.
vi. During each refresh cycle, the beam positions itself at the starting coordinate of a
line, then traces the line to its endpoint, repeating the process for every line segment in
the scene. In this way, the entire image is constructed one line at a time.
vii. Random-scan systems are particularly effective for applications involving
engineering drawings, wireframe models, and line-based illustrations, where precision
and speed in rendering lines are more important than pixel-level image details or color
shading.
Computer graphics is a vital field of computer science that deals with the generation,
display, and manipulation of visual images using digital devices. The backbone of
computer graphics is formed by algorithms, which are step-by-step procedures used to
draw shapes, fill regions, and display objects efficiently on a screen. Understanding these
algorithms is essential for developing software for image processing, animations,
simulations, and games.
Lines are the simplest and most fundamental graphical elements. Drawing a straight line
on a pixel-based display involves determining which pixels best represent the line
between two points. Two widely used algorithms for line generation are the Digital
Differential Analyzer (DDA) and Bresenham’s Line Algorithm.
The DDA algorithm calculates the intermediate pixels along a line using incremental
steps. It determines the difference between the x and y coordinates of the endpoints and
computes the number of steps required to move from the start to the end point. At each
step, it calculates the corresponding x and y coordinates and plots the pixel.
Example: To draw a line from point (2,2) to (8,5), the DDA algorithm steps x by 1 and y
by 0.5, truncating each computed position to an integer, and plots intermediate pixels
such as (3,2), (4,3), and (5,3) until it reaches the endpoint (8,5).
Advantages: Simple and straightforward to implement.
Disadvantages: Uses floating-point calculations, which may lead to rounding errors.
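The DDA procedure can be sketched in Python as follows. This is an illustrative implementation; it truncates the fractional part to match the example above, though rounding to the nearest pixel is equally common:

```python
def dda_line(x0: int, y0: int, x1: int, y1: int):
    """Digital Differential Analyzer: incremental line rasterisation."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))           # number of incremental steps
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps   # floating-point increments
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((int(x), int(y)))     # truncate onto the pixel grid
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(2, 2, 8, 5))
# [(2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 4), (8, 5)]
```

Note that `int()` truncates toward zero; for lines running into negative coordinates, `math.floor` or `round` would be the safer choice.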
Bresenham's Line Algorithm improves on DDA by using only integer arithmetic.
Working: Starting at one endpoint, the algorithm evaluates two possible pixels for the
next step and selects the one closer to the ideal line. This process continues until the line
is completed.
Example: In video games, Bresenham’s algorithm is often used to draw moving objects
because it is fast and precise.
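Bresenham's method can be sketched with integer-only arithmetic. This is an illustrative implementation of the classic algorithm in its all-octant error form:

```python
def bresenham_line(x0: int, y0: int, x1: int, y1: int):
    """Bresenham's line algorithm: at each step, pick whichever of the two
    candidate pixels lies closer to the ideal line, using only integers."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1          # step direction in x
    sy = 1 if y0 < y1 else -1          # step direction in y
    err = dx + dy                      # combined error term
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                   # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:                   # error says: step vertically
            err += dx
            y0 += sy
    return pixels

print(bresenham_line(2, 2, 8, 5))
# [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5), (8, 5)]
```

The pixel choices differ slightly from the DDA example because Bresenham rounds to the nearest pixel rather than truncating, but the endpoints and overall shape agree.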
Circles and ellipses are used in many graphical objects like wheels, buttons, and
decorative shapes. Accurate rendering on a pixel-based display requires specialized
algorithms.
ii. Midpoint Ellipse Algorithm
The ellipse algorithm extends the circle method. Since ellipses have different radii along
x and y axes, the algorithm divides the shape into two regions based on slope. Pixels are
calculated differently in each region and mirrored to complete the ellipse.
Example: Drawing an oval track in a racing game or the outline of an eye in character
design.
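The "circle method" that the ellipse algorithm above extends is usually the midpoint circle algorithm, which computes one octant and mirrors it eight ways. A Python sketch of that standard technique (illustrative, not taken from these notes):

```python
def midpoint_circle(xc: int, yc: int, r: int):
    """Midpoint circle algorithm: rasterise one octant and use
    eight-way symmetry to obtain the full circle."""
    points = set()
    x, y = 0, r
    p = 1 - r                          # initial decision parameter
    while x <= y:
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + dx, yc + dy))   # mirror into all eight octants
        x += 1
        if p < 0:                      # midpoint inside the circle: keep y
            p += 2 * x + 1
        else:                          # midpoint outside: also step y down
            y -= 1
            p += 2 * (x - y) + 1
    return points

print(sorted(midpoint_circle(0, 0, 3)))
```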
Polygons are shapes formed by connecting multiple line segments. Filling polygons with
color or patterns is crucial for visually meaningful graphics.
This method fills polygons by moving a horizontal line, called a scan line, from top to
bottom across the shape. For each scan line, the intersections with polygon edges are
determined, sorted, and filled between pairs of intersections.
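The scan-line procedure can be sketched as follows. This is an illustrative implementation; the half-open edge test used here is one common way to avoid double-counting intersections at shared vertices:

```python
def scanline_fill(vertices):
    """Fill a polygon by intersecting each horizontal scan line with its
    edges, sorting the intersections, and filling between pairs."""
    ys = [y for _, y in vertices]
    filled = []
    n = len(vertices)
    for y in range(min(ys), max(ys) + 1):          # one scan line at a time
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            # Half-open test: each edge owns its lower endpoint only.
            if (y1 <= y < y2) or (y2 <= y < y1):
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]): # fill between pairs
            filled.extend((x, y) for x in range(round(left), round(right) + 1))
    return filled

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
px = scanline_fill(square)
print(len(px))  # 20: rows y = 0..3 of a 5-pixel-wide square
```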
Flood fill colors all connected pixels starting from a seed point until it reaches a
boundary. It is particularly useful for irregular regions.
Boundary fill is similar to flood fill but stops when it encounters a specified boundary
color. It is used to color regions with irregular boundaries precisely.
Example: Filling the petals of a flower in a digital illustration without spilling over the
edges.
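Flood fill is usually written with an explicit stack rather than deep recursion. A Python sketch (illustrative; the grid and colour values are invented for the demonstration):

```python
def flood_fill(image, x, y, new_color):
    """Recolour every pixel 4-connected to (x, y) that shares its colour."""
    h, w = len(image), len(image[0])
    old_color = image[y][x]
    if old_color == new_color:
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cx < w and 0 <= cy < h and image[cy][cx] == old_color:
            image[cy][cx] = new_color
            stack += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]

img = [[0] * 5 for _ in range(5)]
for row in img:
    row[2] = 9                 # a vertical boundary of colour 9 in column 2
flood_fill(img, 0, 0, 7)       # seed point in the left region
print(img[0])  # [7, 7, 9, 0, 0]: the fill stops at the boundary
```

Boundary fill is the same loop with a different stopping test: it recolours while the current pixel is neither the boundary colour nor the new colour.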
i. Point Clipping
Retains a point only if its coordinates lie within the viewing window.
ii. Line Clipping
Truncates or removes lines that extend beyond the window; several well-known
algorithms exist for this task.
iii. Polygon Clipping
Adjusts polygon vertices so that the shape fits within the viewing window.
v. Text Clipping
Ensures that text does not extend beyond the display boundaries.
vi. Exterior Clipping
Retains only the parts of objects that lie outside the viewing window. Useful in map
applications or special visual effects.
Example: When zooming into a digital map, clipping algorithms remove roads or
landmarks outside the visible area for clarity and efficiency.
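Point clipping, the simplest case above, is a direct window-bounds test. A Python sketch (the window values and names are my own illustration):

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Keep a point only if it lies inside the clipping window."""
    return xmin <= x <= xmax and ymin <= y <= ymax

window = (0, 0, 40, 30)                  # xmin, ymin, xmax, ymax
points = [(5, 5), (50, 20), (-3, 8)]
visible = [p for p in points if clip_point(*p, *window)]
print(visible)  # [(5, 5)]: the other two lie outside the window
```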
Computer graphics involves the creation, manipulation, and presentation of visual content
using a combination of software, APIs, specialized hardware, and display devices.
Each component plays a critical role in producing high-quality visuals for applications
such as animation, gaming, CAD, and virtual reality.
I. Graphics Software
Graphics software provides the tools and environment for designing, editing, and
rendering visual content. Different types of software specialize in 2D, 3D, animation, or
scientific visualization.
Adobe Photoshop:
This is primarily a raster-based editor used for photo editing, digital painting, and
image manipulation. It allows users to modify individual pixels to achieve precise control
over images.
Example: A photographer can remove unwanted objects from a photo or enhance colors
to produce visually appealing images using Adobe Photoshop.
CorelDRAW:
This is a vector-based graphics editor used for illustrations, logos, and scalable designs.
It works with mathematical curves and points instead of pixels, ensuring that images can
be resized without losing quality.
Example: A graphic designer can create a company logo that maintains its quality when
scaled for business cards, websites, or billboards using CorelDRAW.
AutoCAD:
This is computer-aided design software used for creating precise 2D and 3D technical
drawings. It is widely used in architecture, engineering, and industrial design.
Example: An architect can draft a 3D model of a building, including interior layouts, and
examine different perspectives using AutoCAD.
Blender:
This is an open-source 3D modeling and animation software that allows sculpting,
texturing, rigging, and animating 3D objects.
Example: An animator can design a 3D character and animate its movements for a short
film using Blender.
Maya:
This is a professional 3D animation and modeling software used for character rigging,
animation, and visual effects.
Example: A game developer can create a realistic human character with natural walking
and running movements using Maya.
MATLAB:
This is scientific and engineering software used for data visualization, simulation, and
image processing.
Example: An engineer can simulate heat distribution in a mechanical component and
visualize it as a 3D heat map using MATLAB.
II. Graphics APIs
OpenGL (Open Graphics Library):
This is a widely used cross-platform API for rendering 2D and 3D graphics. OpenGL
provides functions to draw lines, polygons, and curves, apply 2D and 3D
transformations, and handle lighting, shading, and texture mapping. It leverages the
GPU to render complex visuals in real time.
Example: A developer can create a 3D racing game track with moving vehicles and
interactive obstacles using OpenGL.
Graphics Hardware:
Graphics hardware accelerates the processing and rendering of images, ensuring smooth
and realistic visualization.
b. Frame Buffer:
A frame buffer is a dedicated block of memory that stores the colour,
depth, and transparency information for each pixel.
Example: During a 3D animation, a frame buffer can temporarily store all pixels of a
moving character to ensure smooth motion using GPU memory.
c. Graphics Card:
A graphics card combines the GPU, frame buffer, and supporting circuits into a
single device, allowing efficient rendering of frames.
Example: An architect can produce photorealistic images of buildings with shadows,
reflections, and textures quickly using a high-end graphics card.
Display Hardware:
Display hardware converts digital pixel data generated by the GPU into images visible to
the human eye. Modern displays provide high clarity, resolution, and color accuracy.
i. CRT (Cathode Ray Tube): An early display technology that uses an electron beam
to excite phosphor dots on the screen.
Example: Monitors in the 1980s and early 1990s displayed simple computer
graphics using CRT technology.
ii. LCD (Liquid Crystal Display) and LED (Light Emitting Diode) Displays: Flat-
panel displays that are lightweight, energy-efficient, and high-resolution. LCDs use
liquid crystals with a backlight, while LEDs enhance brightness, contrast, and color
accuracy.
Example: A designer can edit 4K images or play high-definition games on an LED
monitor using LCD/LED display technology.
iii. OLED (Organic LED) and VR Headsets: OLED displays provide high contrast,
vivid colors, and fast response times, while VR headsets deliver stereoscopic 3D
visuals for immersive experiences.
Example: A student can explore a 3D virtual classroom using a VR headset and
interact with objects as if physically present using OLED/VR technology.
The following examples show how graphics software, OpenGL, hardware, and display
devices work together:
a. 3D Character Animation:
A student can design a 3D character in Blender, animate it, render using OpenGL
commands, process it on a GPU, and display it on an LED monitor or VR headset.
b. Engineering Simulation:
An engineer can model a mechanical component in AutoCAD, visualize it using
OpenGL, accelerate rendering with a GPU, and examine it on a high-resolution
display.
c. Virtual Reality Application:
A developer can design a 3D environment in Maya, render it with OpenGL, process
graphics using a GPU, and experience the environment immersively through a VR
headset.
Graphics software provides the creative tools, OpenGL enables efficient rendering,
graphics hardware ensures fast computation, and display devices deliver clear, realistic
visuals. Understanding these components allows students to create interactive, high-
quality graphics applications in gaming, animation, CAD, and virtual reality.
In computer graphics and image processing, modelling refers to the process of creating a
digital or mathematical representation of objects. This allows us to visualize, simulate,
and interact with objects in a virtual environment. Models can range from simple
wireframes to complex solid representations with realistic physical properties.
Understanding modeling is essential for applications such as 3D animation, CAD,
gaming, simulation, and virtual reality.
I. Types of Modeling
i. Wireframe Modeling
Wireframe modeling represents objects using only edges and vertices. Think of it as a
skeleton or framework of the object, where the structure is visible but the surfaces are not
filled in.
Explanation: Wireframe models are created by connecting points (vertices) with lines
(edges) to define the shape of an object. This method is simple, fast, and computationally
efficient, but it lacks surface details, color, or texture, so the object may appear
unrealistic.
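The idea of a wireframe model can be sketched in Python as plain data: just vertices and the edges connecting them, with no surfaces. This is a minimal illustration, not tied to any particular graphics library.

```python
# A minimal wireframe model of a unit cube: eight vertices and the
# twelve edges (stored as vertex-index pairs) that join them.
# Only the skeleton is stored -- no surfaces, colours, or textures.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Two cube corners share an edge exactly when they differ in one coordinate.
edges = [(i, j)
         for i in range(len(vertices))
         for j in range(i + 1, len(vertices))
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]
```

A renderer would draw each edge as a line between its two endpoint vertices, producing the familiar "skeleton" view.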
ii. Surface Modeling
Surface modeling defines the outer layer or skin of an object without representing its
internal volume. It is often used in CAD and 3D visualization.
Explanation: In surface modeling, the focus is on creating realistic surfaces with curves,
textures, and smooth finishes. While the object appears visually complete, the interior
structure is not represented, which means it cannot simulate physical properties like mass
or volume accurately.
iii. Solid Modeling
Solid modeling represents both the surface and the volume of an object, providing a more
realistic and complete representation.
Explanation: Solid models include geometric and physical properties, making it possible
to simulate mass, weight, material behavior, and interactions with other objects. This type
of modeling is widely used in engineering, mechanical design, and simulations.
II. Approaches to Modeling
There are several approaches to modeling, each suitable for different types of objects,
applications, or simulations.
i. Geometric Modeling
Geometric modeling uses mathematical equations, points, lines, and polygons to define
objects.
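As a small sketch of geometric modeling in Python: a circle can be defined purely by its parametric equations and approximated on screen as an n-sided polygon. The helper name is illustrative, not from any library.

```python
import math

def circle_polygon(cx, cy, r, n=16):
    """Approximate a circle with an n-sided polygon using the parametric
    equations x = cx + r*cos(theta), y = cy + r*sin(theta)."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

Increasing n makes the polygon a closer approximation of the true mathematical circle; this trade-off between accuracy and vertex count is typical of geometric modeling.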
ii. Procedural Modeling
Explanation: Procedural modeling generates objects algorithmically from rules and
parameters instead of modeling each one by hand.
Example: In a video game, a vast forest can be created automatically using procedural
modeling, where the software generates thousands of trees with different shapes, heights,
and branch patterns algorithmically.
iii. Physical Modeling
Explanation: Physical modeling applies physical laws such as gravity, elasticity, and
collision forces so that objects move and deform realistically.
Example: In a car crash simulation, physical modeling ensures that the car deforms
realistically upon impact, accounting for material elasticity and collision forces.
iv. Object-Oriented Modeling
Explanation: Each object in the model has properties (attributes) such as color, size, and
position, and actions (behaviors) it can perform, such as moving, rotating, or interacting
with other objects.
Understanding these concepts allows students to create realistic and interactive graphics,
simulate real-world behavior, and design efficient visualizations for various fields.
In computer graphics, ensuring that objects are properly displayed on the screen is crucial
for realism and efficiency. Techniques like clipping, hidden surface elimination, and anti-
aliasing help render scenes accurately, remove unnecessary parts, and improve visual
quality.
I. Clipping
Clipping is the process of removing or adjusting parts of objects that lie outside a
specified viewing area or window. It ensures that only the visible portions of objects are
rendered, improving efficiency and performance.
i. Point Clipping
Explanation: Determines whether a single point lies inside or outside the viewing
window. Points outside the window are ignored.
Example: In a 2D drawing program, a point plotted outside the canvas boundary will not
be displayed using point clipping.
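Point clipping reduces to a simple bounds check, which can be sketched in Python (the function name is illustrative):

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Point clipping: keep the point only if it lies inside the
    rectangular clip window."""
    return xmin <= x <= xmax and ymin <= y <= ymax
```

A drawing program would call this for each plotted point and skip rendering whenever it returns False.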
ii. Line Clipping
Explanation: Removes portions of a line that lie outside the viewing area. Algorithms
like Cohen–Sutherland and Liang–Barsky are commonly used.
Example: A road in a 2D map that extends beyond the visible screen will only display
the segment inside the view using line clipping.
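The Cohen-Sutherland algorithm mentioned above classifies each endpoint with a 4-bit region outcode, trivially accepting or rejecting the segment where possible and otherwise clipping an outside endpoint to a window boundary. A compact Python sketch:

```python
# Region outcodes for the Cohen-Sutherland line-clipping algorithm.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Classify a point against the four window boundaries."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip segment (x1, y1)-(x2, y2) to the window; return the clipped
    segment, or None if it lies entirely outside."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):      # both endpoints inside: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:            # both share an outside zone: trivially reject
            return None
        c = c1 or c2           # pick an endpoint that is outside
        if c & TOP:
            x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
        elif c & BOTTOM:
            x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
        else:                  # LEFT
            y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
        if c == c1:
            x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
        else:
            x2, y2, c2 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
```

In the road example, a segment crossing the screen is clipped so only the visible stretch between the window edges is kept.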
iii. Area (Polygon) Clipping
Explanation: Removes or adjusts polygons or filled areas outside the viewing window.
Example: In a 2D game, the portion of a platform outside the visible screen is clipped so
that only the part within the camera view is shown.
iv. Curve Clipping
Explanation: Clips curves like Bezier or B-spline curves to fit within the visible
window.
Example: In vector illustration software, a curved path extending beyond the canvas
boundary is clipped to remain inside the drawing area.
v. Text Clipping
Explanation: Ensures that text appearing outside the display area is partially or fully
hidden.
Example: A scrolling text banner on a website only shows the part of the text that fits
within the container using text clipping.
vi. Exterior Clipping
Explanation: Keeps objects outside a specified region while removing everything inside.
Example: Highlighting the background of an image while removing the subject from the
center using exterior clipping.
II. Hidden Surface Elimination
Hidden surface elimination ensures that only visible surfaces of 3D objects are displayed,
preventing surfaces behind other objects from being rendered. This improves realism and
reduces computation.
i. Techniques
a. Back-Face Detection: Removes polygons facing away from the viewer.
Example: In a 3D cube, the faces on the far side from the camera are hidden using back-
face detection.
b. Z-Buffer (Depth Buffer) Method: Stores the depth of the nearest surface at each pixel
and keeps only the closest one.
Example: In a 3D game, a tree in front of a house ensures that only the visible parts of
the house are rendered using a Z-buffer.
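The Z-buffer idea can be sketched in Python: each incoming fragment (a candidate colour for a pixel) only survives if it is closer than whatever is already stored at that pixel. The function and data layout here are illustrative.

```python
def zbuffer_render(width, height, fragments):
    """Z-buffer sketch. fragments is an iterable of (x, y, depth, colour)
    tuples, with smaller depth meaning closer to the camera. For every
    pixel, only the nearest fragment's colour survives."""
    depth = [[float("inf")] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:        # closer than what is stored: overwrite
            depth[y][x] = z
            colour[y][x] = c
    return colour
```

In the tree-and-house example, the tree's fragments have smaller depth values than the house's at overlapping pixels, so the tree wins there and the hidden parts of the house are never shown.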
c. Painter’s Algorithm: Draws surfaces from back to front, ensuring closer objects
overwrite distant ones.
Example: In an animated scene, mountains in the background are drawn first, and
characters in front appear on top.
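The painter's algorithm is essentially a depth sort: surfaces are drawn from the farthest to the nearest so that closer surfaces paint over distant ones. A minimal sketch, with surfaces represented as (depth, name) pairs for illustration:

```python
def painters_order(surfaces):
    """Painter's algorithm ordering. surfaces is a list of (depth, name)
    pairs, larger depth meaning farther away; return the drawing order,
    farthest first so nearer surfaces are painted over them."""
    return [name for _, name in sorted(surfaces, key=lambda s: s[0], reverse=True)]
```

For the animated scene described above, the mountains (largest depth) come out first and the characters (smallest depth) last.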
III. Anti-Aliasing
Anti-aliasing is a technique for reducing the jagged, stair-step appearance of lines and
curves on pixel-based displays.
Explanation: Aliasing occurs because digital displays represent continuous objects using
discrete pixels, causing stair-step effects. Anti-aliasing algorithms calculate intermediate
colors or intensities to reduce the jagged effect.
Example: In a 2D game, a diagonal sword drawn on the screen may appear jagged
without anti-aliasing. Applying anti-aliasing smooths the edge, making the sword appear
visually continuous and realistic.
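One common way to compute those intermediate intensities is supersampling: take several sample points inside each pixel and average how many fall inside the shape. A minimal Python sketch (function names are illustrative):

```python
def supersample_pixel(coverage_fn, px, py, n=4):
    """Estimate a pixel's intensity by averaging an n x n grid of sample
    points inside the pixel. coverage_fn(x, y) returns 1.0 where the
    shape covers the point and 0.0 elsewhere."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            total += coverage_fn(x, y)
    return total / (n * n)

def sword_edge(x, y):
    """A diagonal edge: the shape covers everything below the line y = x."""
    return 1.0 if y < x else 0.0
```

Pixels well inside the shape come out at intensity 1.0, pixels well outside at 0.0, and pixels straddling the diagonal get an intermediate value, which is exactly what smooths the staircase.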
Note: Clipping ensures that only visible portions of points, lines, areas, curves, and text
are displayed, improving efficiency.
Hidden surface elimination prevents objects behind other objects from being drawn,
enhancing realism in 3D scenes.
Anti-aliasing reduces jagged edges, providing smooth and visually appealing images.
Mastering these techniques allows students to create high-quality, accurate, and efficient
graphics, essential for gaming, animation, CAD, and simulations.
In computer graphics and image processing, understanding how colour, image data, and
rendering techniques work is essential for producing realistic, visually appealing, and
professional-quality images. These concepts determine how objects appear on the screen,
how images are stored and manipulated, and how final visuals are generated for
animation, games, or simulations.
I. Colour Theory
Colour theory is the study of how colours are created, combined, and represented in
digital graphics. Mastering colour theory helps designers, animators, and image
processors to produce aesthetically pleasing and accurate visuals.
i. Primary Colours
Explanation: Digital displays use Red, Green, and Blue (RGB) as primary colours. By
mixing these three colours in different intensities, virtually all visible colours can be
created. The RGB model is additive, meaning that combining all three at full intensity
produces white light, while the absence of all three produces black.
Example: On a computer screen, mixing red and green light at equal full intensity with
no blue (RGB 255, 255, 0) creates yellow. This is useful when designing traffic lights,
icons, or any UI elements where precise colour reproduction is required.
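Additive RGB mixing can be demonstrated in a few lines of Python, treating each colour as an (R, G, B) tuple of 0-255 channel values (the helper name is illustrative):

```python
def add_light(c1, c2):
    """Additively mix two RGB light sources, clamping each channel at 255."""
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

# The three additive primaries at full intensity.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
```

Mixing red and green gives yellow, and mixing all three primaries at full intensity gives white, matching the additive model described above.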
ii. Secondary and Tertiary Colours
Explanation: Secondary colours are produced by combining two primary colours, such
as cyan (green + blue), magenta (red + blue), and yellow (red + green). Tertiary
colours result from combining a primary colour with a secondary colour. Understanding
these combinations allows for accurate colour palettes in graphics and animations.
Example: In digital painting software, mixing blue and red produces purple, which can
be used for shadows, highlights, or artistic effects in illustrations.
iii. Colour Models
a. RGB Model: Represents colours as combinations of red, green, and blue light.
Commonly used for monitors, cameras, and digital displays.
b. CMYK Model: Represents colours based on Cyan, Magenta, Yellow, and
Key/Black inks, primarily used for printing.
c. HSV/HSL Models: Represent colours using Hue (colour type), Saturation
(intensity), and Value/Lightness, which is intuitive for designers when selecting or
adjusting colours.
Example: A graphic designer can adjust the hue and saturation in the HSV model to
match a brand’s colour palette accurately, ensuring consistency across digital and print
media.
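The kind of hue/saturation adjustment described above can be sketched with Python's standard-library colorsys module, which converts between RGB and HSV (channels as floats in [0, 1]); the helper function itself is illustrative:

```python
import colorsys

def adjust_saturation(rgb, factor):
    """Scale a colour's saturation in HSV space.
    rgb is an (r, g, b) tuple of floats in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = max(0.0, min(1.0, s * factor))   # clamp to the valid range
    return colorsys.hsv_to_rgb(h, s, v)
```

Dropping saturation to zero turns any colour into a neutral grey of the same brightness, which is why HSV is a convenient space for this kind of edit.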
II. Image Representation
i. Raster Images
Explanation: Raster images consist of a grid of pixels, where each pixel has a specific
colour value. This representation is ideal for photographs or images with fine details, but
scaling can lead to pixelation.
ii. Vector Images
Explanation: Vector images use mathematical equations to define lines, curves, and
shapes, making them resolution-independent. They can be scaled to any size without
losing clarity.
Example: A logo designed in CorelDRAW can be scaled from a small business card to a
massive billboard while maintaining sharp edges and clean lines.
III. Rendering
Rendering is the process of generating a final image from a model or scene description,
taking geometry, lighting, materials, and viewpoint into account.
i. Types of Rendering
a. Realistic Rendering: Simulates light interactions, shadows, reflections, and textures
for photo-realistic images.
Example: In an animated movie, a glass of water reflects and refracts light realistically,
enhancing visual fidelity.
b. Non-Photorealistic Rendering (NPR): Produces stylized images that imitate artistic
styles such as sketches, cartoons, or paintings.
Example: A digital comic book uses NPR rendering to give characters a hand-drawn
appearance.
c. Wireframe Rendering: Displays only edges and vertices, showing object structure
without surfaces.
Animation is the art and science of creating the illusion of motion by displaying a series
of still images in rapid succession. It plays a vital role in entertainment, education,
simulation, and visualization. Understanding animation, its applications, and basic
functions helps students develop interactive, realistic, and visually appealing graphics.
I. Applications of Animation
i. Entertainment and Media
Explanation: Animation is extensively used to create movies, TV shows, and online
videos. By sequencing images, animators bring characters, backgrounds, and stories to
life.
Example: Classic Disney films like The Lion King or modern animated movies like
Frozen use frame-by-frame and computer-generated animations to depict movement,
expressions, and dynamic environments.
ii. Simulations
iii. Gaming
Example: Characters in games like Fortnite or Minecraft move, jump, and interact with
objects through precise animation functions, making the experience lifelike and engaging.
iv. Medical Visualization
Example: Animated 3D models of the heart can show blood flow, valve operation, or
disease progression, aiding doctors and students in understanding complex systems.
v. Educational and Scientific Applications
Example: A biology lesson might animate cell division, showing each phase in sequence,
helping students understand the process step by step.
II. Basic Functions of Animation
To create smooth and realistic animations, several core functions are used. These
functions define motion, timing, and transformation of objects in an animation.
i. Frame Generation
Explanation: Frame generation produces the individual still images (frames) that make
up an animation; shown in rapid succession, they create the illusion of motion.
Example: A bouncing ball animation may generate 24 frames per second, where each
frame shows the ball at a slightly different position along its path. Displayed in rapid
succession, the ball appears to move smoothly.
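The bouncing-ball idea can be sketched in Python by sampling the ball's height once per frame; here |sin| stands in for a real physics model, and the function name is illustrative:

```python
import math

def ball_frames(fps=24, seconds=1.0, bounce_height=100.0):
    """Sample the ball's height once per frame. abs(sin(...)) gives one
    simple bounce per second -- a sketch; real physics would use gravity."""
    n = int(fps * seconds)
    return [bounce_height * abs(math.sin(math.pi * t / fps)) for t in range(n)]
```

Played back at 24 frames per second, these 24 slightly different positions produce one second of apparently smooth bouncing motion.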
ii. Interpolation
Explanation: Interpolation (in-betweening) automatically computes the intermediate
frames between key frames so that motion appears continuous.
Example: In a character walk cycle, key frames show the start and end positions of the
limbs. Interpolation fills the gaps, generating the frames in between for fluid movement.
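The simplest form of in-betweening is linear interpolation between two key-frame values, sketched here in Python (helper names are illustrative):

```python
def lerp(start, end, t):
    """Linear interpolation between two key values; t runs from 0 to 1."""
    return start + (end - start) * t

def inbetweens(key_start, key_end, n_frames):
    """Generate n_frames values from key_start to key_end, inclusive."""
    return [lerp(key_start, key_end, i / (n_frames - 1)) for i in range(n_frames)]
```

For a limb rotating from 0 to 90 degrees, the in-between frames step evenly through the intermediate angles, which is what makes the walk cycle look fluid.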
iii. Transformation and Keyframing
Explanation: Transformation refers to moving, rotating, or scaling objects over time.
Keyframing is a method where critical positions or states of objects (key frames) are
defined, and the software automatically computes transitions between them.
Example: To animate a rocket launch, key frames define its initial position on the ground
and its position at the top of the screen. Transformation functions handle the gradual
upward movement, scaling effects (if zooming in/out), and rotation to simulate realistic
launch dynamics.
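The three basic transformations can be sketched as small Python functions operating on 2D points; an animation system would apply them with time-varying parameters between key frames (the function names are illustrative):

```python
import math

def translate(point, dx, dy):
    """Move a 2D point by (dx, dy)."""
    x, y = point
    return (x + dx, y + dy)

def scale(point, sx, sy):
    """Scale a 2D point about the origin."""
    x, y = point
    return (x * sx, y * sy)

def rotate(point, angle_deg):
    """Rotate a 2D point about the origin by angle_deg degrees."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

For the rocket launch, each frame would translate the rocket upward by a gradually computed offset, optionally scaling it to simulate a camera zoom.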
iv. Motion Paths
Explanation: Motion paths define a trajectory for objects to follow during animation.
This allows complex movements like curves, spirals, or irregular motion patterns.
Example: A butterfly in a garden animation follows a winding flight path across flowers,
giving a natural appearance to its motion.
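A motion path can be expressed as a parametric function of time: sampling it at regular intervals yields the object's position in each frame. The winding path below is a hypothetical example, not taken from any tool:

```python
import math

def path_point(t):
    """A hypothetical winding trajectory for t in [0, 1]: steady progress
    along x while weaving back and forth in y, like a butterfly's flight."""
    return (100.0 * t, 20.0 * math.sin(4.0 * math.pi * t))

# Sample 51 positions along the path, one per animation frame.
flight_path = [path_point(i / 50) for i in range(51)]
```

Swapping in a different parametric function (a spiral, a Bezier curve) changes the trajectory without touching the rest of the animation code.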
v. Timing and Easing
Explanation: Timing functions control the speed of animation, while easing functions
smooth acceleration and deceleration to mimic realistic movement.
Example: A car accelerating from rest uses ease-in to start slowly and gradually increase
speed, while ease-out slows it down before stopping, creating a natural feel.
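Easing functions are commonly implemented as simple curves that remap normalized time t in [0, 1]; the quadratic and smoothstep forms below are standard choices, sketched in Python:

```python
def ease_in(t):
    """Start slowly: quadratic acceleration."""
    return t * t

def ease_out(t):
    """Stop slowly: quadratic deceleration."""
    return 1.0 - (1.0 - t) ** 2

def ease_in_out(t):
    """Smoothstep: slow start and stop, fastest in the middle."""
    return t * t * (3.0 - 2.0 * t)
```

Feeding the eased t into an interpolation function makes the car pull away gently and glide to a stop instead of moving at a constant, mechanical speed.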