UNIT-I Notes
History of Computers
The first counting devices were used by primitive people, who used sticks, stones and bones as counting tools. As the human mind and technology improved with time, more computing devices were developed. Some of the popular computing devices, from the earliest to more recent ones, are described below:
(1) Abacus
The history of computers begins with the birth of the abacus, which is believed to be the first computer. It is said that the Chinese invented the abacus around 4,000 years ago.
It was a wooden rack holding metal rods with beads mounted on them. The beads were moved by the abacus operator according to some rules to perform arithmetic calculations. The abacus is still used in some countries like China, Russia and Japan.
(2) Napier's Bones
It was a manually operated calculating device invented by John Napier (1550-1617) of Merchiston. In this calculating tool, he used 9 different ivory strips or bones marked with numbers to multiply and divide, so the tool became known as "Napier's Bones". It was also the first machine to use the decimal point.
(3) Pascaline
Pascaline is also known as the Arithmetic Machine or Adding Machine. It was invented between 1642 and 1644 by the French mathematician-philosopher Blaise Pascal. It is believed to be the first mechanical and automatic calculator.
Pascal invented this machine to help his father, a tax accountant. It could only perform addition and subtraction. It was a wooden box with a series of gears and wheels. When a wheel is rotated one revolution, it rotates the neighbouring wheel, and a series of windows on top of the wheels is used to read the totals.
(5) Difference Engine
In the early 1820s, it was designed by Charles Babbage, who is known as the "Father of the Modern Computer". It was a mechanical computer which could perform simple calculations. It was a steam-driven calculating machine designed to compute tables of numbers, such as logarithm tables.
(6) Analytical Engine
This calculating machine was also developed by Charles Babbage, in 1830. It was a mechanical computer that used punched cards as input. It was capable of solving any mathematical problem and of storing information as permanent memory.
(8) Differential Analyzer
Introduced in the United States in 1930, it was an analog device invented by Vannevar Bush and one of the first large-scale computing machines of its kind. The machine used vacuum tubes to switch electrical signals to perform calculations and could do 25 calculations in a few minutes.
(9) Mark I
The next major change in the history of computers came in 1937, when Howard Aiken planned to develop a machine that could perform calculations involving large numbers. In 1944, the Mark I computer was built as a partnership between IBM and Harvard. It was the first programmable digital computer.
Generations of Computers
A generation of computers refers to the specific improvements in computer technology over time. In 1946, electronic pathways called circuits were developed to perform the counting, replacing the gears and other mechanical parts used for counting in previous computing machines.
In each new generation, the circuits became smaller and more advanced than in the previous generation. This miniaturization helped increase the speed, memory and power of computers. The five generations of computers are described below.
First Generation Computers
The first generation (1946-1959) computers were slow, huge and expensive. In these computers, vacuum tubes were used as the basic components of the CPU and memory. These computers mainly depended on batch operating systems and punched cards. Magnetic tape and paper tape were used as input and output devices in this generation.
Second Generation Computers
The second generation (1959-1965) was the era of transistor computers. These computers used transistors, which were cheap, compact and consumed less power; this made transistor computers faster than first generation computers.
In this generation, magnetic cores were used as the primary memory, and magnetic discs and tapes were used as secondary storage. Assembly language and high-level programming languages like COBOL and FORTRAN were used, along with batch processing and multiprogramming operating systems. Some computers of this generation were:
o IBM 1620
o IBM 7094
o CDC 1604
o CDC 3600
o UNIVAC 1108
Third Generation Computers
The third generation computers used integrated circuits (ICs) instead of transistors. A single IC can pack a huge number of transistors, which increased the power of a computer and reduced the cost. The computers also became more reliable, efficient and smaller in size. This generation of computers used remote processing, time-sharing and multiprogramming operating systems. High-level programming languages like FORTRAN-II to IV, COBOL, PASCAL, PL/1 and ALGOL-68 were also used in this generation. Some computers of this generation were:
o IBM-360 series
o Honeywell-6000 series
o PDP (Programmed Data Processor)
o IBM-370/168
o TDC-316
Fourth Generation Computers
The fourth generation (1971-1980) computers used very large scale integration (VLSI) circuits: a chip containing millions of transistors and other circuit elements. These chips made the computers of this generation more compact, powerful, fast and affordable. They used real-time, time-sharing and distributed operating systems. Programming languages like C, C++ and DBASE were also used in this generation. Some computers of this generation were:
o DEC 10
o STAR 1000
o PDP 11
o CRAY-1(Super Computer)
o CRAY-X-MP(Super Computer)
Fifth Generation Computers
In fifth generation (1980-present) computers, VLSI technology was replaced with ULSI (Ultra Large Scale Integration), which made it possible to produce microprocessor chips with ten million electronic components. This generation of computers uses parallel processing hardware and AI (Artificial Intelligence) software. The programming languages used in this generation include C, C++, Java, .Net, etc. Some computers of this generation are:
o Desktop
o Laptop
o NoteBook
o UltraBook
o ChromeBook
Types of Computer
We can categorize computers in two ways: on the basis of data handling capabilities and on the basis of size. On the basis of data handling capabilities, there are three types of computer:
o Analogue Computer
o Digital Computer
o Hybrid Computer
1) Analogue Computer
Analogue computers are designed to process analogue data. Analogue data is continuous data that changes continuously and cannot have discrete values. We can say that analogue computers are used where we don't always need exact values, such as for speed, temperature, pressure and current.
Analogue computers directly accept the data from the measuring device without first
converting it into numbers and codes. They measure the continuous changes in physical
quantity and generally render output as a reading on a dial or scale. Speedometer and mercury
thermometer are examples of analogue computers.
2) Digital Computer
Digital computer is designed to perform calculations and logical operations at high speed. It
accepts the raw data as input in the form of digits or binary numbers (0 and 1) and processes it
with programs stored in its memory to produce the output. All modern computers that we use at home or in the office, such as laptops, desktops and smartphones, are digital computers.
3) Hybrid Computer
A hybrid computer has features of both analogue and digital computers. It is fast like an analogue computer and has memory and accuracy like a digital computer. It can process both continuous and discrete data. It accepts analogue signals and converts them into digital form before processing. So, it is widely used in specialized applications where both analogue and digital data are processed. For example, a processor used in petrol pumps converts the measurements of fuel flow into quantity and price. Similarly, hybrid computers are used in airplanes, hospitals, and scientific applications.
Advantages of using hybrid computers:
o Its computing speed is very high due to the all-parallel configuration of the analogue
subsystem.
o It produces precise and quick results that are more accurate and useful.
o It has the ability to solve and manage big equations in real time.
o It helps in online data processing.
On the basis of size, there are five types of computer:
1) Supercomputer
Supercomputers are the biggest and fastest computers. They are designed to process huge
amount of data. A supercomputer can process trillions of instructions in a second. It has
thousands of interconnected processors.
2) Mainframe computer
Mainframe computers are large, powerful computers designed to support hundreds or thousands of users simultaneously and to process large volumes of data, which is why big organizations such as banks and government departments use them.
3) Miniframe or Minicomputer
A minicomputer is a midsize computer that can support multiple users at a time; it is smaller and less powerful than a mainframe but larger and more powerful than a microcomputer.
Applications of minicomputers:
A minicomputer is mainly used to perform three primary functions, which are as follows:
o Process control: It is used for process control in manufacturing. It mainly performs two primary functions: collecting data and providing feedback. If any abnormality occurs in the process, it is detected by the minicomputer and necessary adjustments are made accordingly.
o Data management: It is an excellent device for small organizations to collect, store
and share data. Local hospitals and hotels can use it to maintain the records of their
patients and customers respectively.
o Communications Portal: It can also play the role of a communication device in larger
systems by serving as a portal between a human operator and a central processor or
computer.
4) Workstation
Workstation is a single user computer that is designed for technical or scientific applications.
It has a faster microprocessor, a large amount of RAM and high-speed graphics adapters. It generally performs a specific job with great expertise; accordingly, workstations are of different types, such as graphics workstations, music workstations and engineering design workstations.
Characteristics of workstation computer:
o It is a high-performance computer system designed for a single user for business or
professional use.
o It has a larger storage capacity, better graphics, and a more powerful CPU than a personal computer.
o It can handle animation, data analysis, CAD, audio and video creation and editing.
Any computer that has the following five features, can be termed as a workstation or can be
used as a workstation.
o Multiple Processor Cores: It has more processor cores than simple laptops or
computers.
o ECC RAM: It is provided with Error-correcting code memory that can fix memory
errors before they affect the system's performance.
o RAID (Redundant Array of Independent Disks): It refers to multiple internal hard
drives to store or process data. RAID can be of different types; for example, there can be multiple drives to process data, or mirrored drives where, if one drive stops working, the other starts functioning.
o SSD: It is better than conventional hard-disk drives. It does not have moving parts, so the chances of physical failure are much lower.
o Optimized, higher-end GPU: It reduces the load on the CPU; e.g., the CPU has to do less work while processing the screen output.
5) Microcomputer
A microcomputer is also known as a personal computer. It is a general-purpose computer designed for individual use, with a microprocessor as its central processing unit along with memory, storage, and input and output units. Laptops and desktop computers are examples of microcomputers.
Characteristics of a microcomputer:
o It is the smallest in size among all types of computers.
o It can run only a limited range of software.
o It is designed for personal work and applications. Only one user can work at a time.
o It is less expensive and easy to use.
o It does not require the user to have special skills or training to use it.
o It generally comes with a single semiconductor chip.
o It is capable of multitasking such as printing, scanning, browsing, watching videos, etc.
Computer Components
A computer consists of five main components, which are given below:
o Input Devices
o CPU
o Output Devices
o Primary Memory
o Secondary Memory
Together, these components carry out five basic operations, which are described below.
1) Inputting: It is the process of entering raw data, instructions and information into the computer. It is performed with the help of input devices.
2) Storing: The computer has primary memory and secondary storage to store data and
instructions. It stores the data before sending it to CPU for processing and also stores the
processed data before displaying it as output.
3) Processing: It is the process of converting the raw data into useful information. This process
is performed by the CPU of the computer. It takes the raw data from storage, processes it and
then sends back the processed data to storage.
4) Outputting: It is the process of presenting the processed data through output devices like
monitor, printer and speakers.
5) Controlling: This operation is performed by the control unit, which is a part of the CPU. The control unit ensures that all basic operations are executed in the right manner and sequence.
Components of the CPU
A computer is made up of multiple elements that help in processing and effective functioning. In this section, we discuss the parts of the CPU. Before discussing the parts, let's see a brief description of the CPU.
CPU
CPU is an acronym for "Central Processing Unit". It is also known as the brain of the computer.
The CPU receives instructions from the active software and hardware and produces the output accordingly. In addition, it stores data, instructions, and intermediate results. Thus, the CPU controls the operations of all the computer's parts.
Central Processing Unit (CPU) = Arithmetic and Logical Unit (ALU) + Control Unit (CU)
It carries out the instructions of computer programs and performs all the basic arithmetic and logical operations. CPUs are of three major types: transistor CPUs, small-scale integration CPUs, and large-scale integration CPUs. Now, let's discuss the parts of the CPU.
Components
There are three components of a Central Processing Unit (CPU), which are described as follows.
Arithmetic and Logical Unit (ALU)
As the name implies, the Arithmetic and Logical Unit (or ALU) performs arithmetic and logical functions. The arithmetic functions include addition, subtraction, multiplication, division, and comparisons, whereas the logical functions mainly include selecting, comparing, and merging data. Complex operations are performed by making repetitive use of the operations mentioned above.
There can be more than one ALU in a CPU. Furthermore, ALUs can be used for maintaining
timers that help run the computer.
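For a rough picture of what these operations look like in a program, the small C++ snippet below uses the same kinds of arithmetic and logical operations that the ALU carries out; the numbers and variable names are only an example.

#include <iostream>

int main() {
    int a = 12, b = 5;

    // Arithmetic operations handled by the ALU
    std::cout << "Sum: "        << a + b << "\n";
    std::cout << "Difference: " << a - b << "\n";
    std::cout << "Product: "    << a * b << "\n";
    std::cout << "Quotient: "   << a / b << "\n";

    // Logical operations: comparison and selection
    std::cout << "Is a greater than b? " << (a > b) << "\n";
    std::cout << "Larger value: "        << (a > b ? a : b) << "\n";
    return 0;
}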
Now, let's move on to the next part of the Central Processing Unit (CPU), i.e., the Control Unit (CU).
Control Unit (CU)
The Control Unit is one of the crucial components of the CPU. It instructs the complete computer system to perform a particular task, and it controls and coordinates the functioning of all parts of the computer. It takes instructions from memory and then decodes and executes these instructions.
The Control Unit mainly regulates and maintains the flow of information across the processor. It does not take part in storing or processing data.
Memory Unit
This unit temporarily stores data, programs, and intermediate and final results of processing.
So, it acts as a temporary storage area that holds the data temporarily, which is used to run the
computer. The memory unit supplies the data to other units of the computer whenever it is
required to do so. It is also called primary memory or main memory, or the internal storage
unit. The functions of the memory unit are listed as follows -
o The data and instructions that are required for processing are stored in a memory unit.
o All inputs/outputs are transmitted by the main memory.
o The memory unit also stores the intermediate results and final results of processing.
That's all about the parts of the CPU. So, there are three CPU components: Arithmetic and
Logical Unit (ALU), Control Unit (CU), and memory unit.
Input Devices
An input device enables the user to send data, information, or control signals to a computer. The
Central Processing Unit (CPU) of a computer receives the input and processes it to produce
the output.
1. Keyboard
2. Mouse
3. Scanner
4. Joystick
5. Light Pen
6. Digitizer
7. Microphone
8. Magnetic Ink Character Recognition (MICR)
9. Optical Character Reader (OCR)
10. Digital Camera
11. Paddle
12. Steering Wheel
13. Gesture recognition devices
14. Light Gun
15. Touch Pad
16. Remote
17. Touch screen
18. VR
19. Webcam
20. Biometric Devices
Output Devices
The output device displays the result of processing the raw data that is entered into the computer through an input device. There are a number of output devices that display output in different ways, such as text, images, hard copies, and audio or video.
1. Monitor
o CRT Monitor
o LCD Monitor
o LED Monitor
o Plasma Monitor
2. Printer
o Impact Printers
A. Character Printers
i. Dot Matrix printers
ii. Daisy Wheel printers
B. Line printers
i. Drum printers
ii. Chain printers
o Non-impact printers
A. Laser printers
B. Inkjet printers
3. Projector
Types of Computer Memory
The computer memory holds the data and instructions needed to process raw data and produce output. The computer memory is divided into a large number of small parts known as cells. Each cell has a unique address, which varies from 0 to the memory size minus one.
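As a simple mental model (not a description of real hardware), the C++ sketch below treats memory as an array of cells whose addresses run from 0 to size minus one; the size and values are only illustrative.

#include <array>
#include <cstdint>
#include <iostream>

int main() {
    // Model a tiny memory of 16 cells; each cell holds one byte.
    std::array<std::uint8_t, 16> memory{};

    memory[0]  = 0x41;   // write to the cell at address 0
    memory[15] = 0xFF;   // write to the last cell (address = size - 1)

    // Read a cell back by its address
    std::cout << "Cell 0 holds: " << static_cast<int>(memory[0]) << "\n";
    std::cout << "Addresses run from 0 to " << memory.size() - 1 << "\n";
    return 0;
}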
Computer memory is of two types: volatile (RAM) and non-volatile (ROM). The secondary memory (hard disk) is referred to as storage, not memory. In general, computer memory is classified into the following types:
o Register memory
o Cache memory
o Primary memory
o Secondary memory
Register Memory
Register memory is the smallest and fastest memory in a computer. It is not a part of the main
memory and is located in the CPU in the form of registers, which are the smallest data holding
elements. A register temporarily holds frequently used data, instructions, and memory addresses that are to be used by the CPU. Registers hold the instructions that are currently being processed by the CPU. All data is required to pass through registers before it can be processed, so they are used by the CPU to process the data entered by the users.
Registers hold a small amount of data, around 32 to 64 bits. The speed of a CPU depends
on the number and size (no. of bits) of registers that are built into the CPU. Registers can be of
different types based on their uses. Some of the widely used Registers include Accumulator or
AC, Data Register or DR, the Address Register or AR, Program Counter (PC), I/O Address
Register, and more.
Cache Memory
Cache memory is a high-speed memory, which is small in size but faster than the main memory
(RAM). The CPU can access it more quickly than the primary memory. So, it is used to
synchronize with the high-speed CPU and to improve its performance.
Cache memory can be accessed only by the CPU. It can be a reserved part of the main memory or a storage device outside the CPU. It holds the data and programs which are frequently used by the CPU, so it makes sure that the data is instantly available to the CPU whenever the CPU needs it. In other words, if the CPU finds the required data or instructions in the cache memory,
it doesn't need to access the primary memory (RAM). Thus, by acting as a buffer between
RAM and CPU, it speeds up the system performance.
L1: It is the first level of cache memory, which is called Level 1 cache or L1 cache. In this type
of cache memory, a small amount of memory is present inside the CPU itself. If a CPU has
four cores (quad core cpu), then each core will have its own level 1 cache. As this memory is
present in the CPU, it can work at the same speed as the CPU. The size of this memory
ranges from 2KB to 64 KB. The L1 cache further has two types of caches: Instruction cache,
which stores instructions required by the CPU, and the data cache that stores the data required
by the CPU.
L2: This cache is known as Level 2 cache or L2 cache. This level 2 cache may be inside the
CPU or outside the CPU. All the cores of a CPU can have their own separate level 2 cache, or
they can share one L2 cache among themselves. In case it is outside the CPU, it is connected
to the CPU by a very high-speed bus. The memory size of this cache ranges from 256 KB to 512 KB. In terms of speed, it is slower than the L1 cache.
L3: It is known as Level 3 cache or L3 cache. This cache is not present in all the processors;
some high-end processors may have this type of cache. This cache is used to enhance the
performance of Level 1 and Level 2 cache. It is located outside the CPU and is shared by all
the cores of a CPU. Its memory size ranges from 1 MB to 8 MB. Although it is slower than L1
and L2 cache, it is faster than Random Access Memory (RAM).
If the required data is not available in any of the cache levels, the CPU looks for it in the Random Access Memory (RAM). If the RAM also does not have the data, it is fetched from the Hard Disk Drive.
So, when a computer is started for the first time, or an application is opened for the first time,
data is not available in cache memory or in RAM. In this case, the CPU gets the data directly
from the hard disk drive. Thereafter, when you start your computer or open that application again, the CPU can get the data from cache memory or RAM.
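The lookup order described above (cache first, then RAM, then the hard disk) can be sketched in a few lines of C++. The maps and the file name below are purely illustrative; real hardware does this in circuitry, not in code.

#include <iostream>
#include <map>
#include <optional>
#include <string>

// Illustrative data stores standing in for cache, RAM and disk.
std::map<std::string, int> cache;
std::map<std::string, int> ram;
std::map<std::string, int> disk = {{"report.txt", 42}};

std::optional<int> read(const std::string& key) {
    if (cache.count(key)) return cache[key];      // fastest: cache hit
    if (ram.count(key)) {                         // slower: main memory
        cache[key] = ram[key];                    // keep a copy in cache
        return ram[key];
    }
    if (disk.count(key)) {                        // slowest: hard disk
        ram[key]   = disk[key];                   // load into RAM
        cache[key] = disk[key];                   // and into cache
        return disk[key];
    }
    return std::nullopt;                          // not found anywhere
}

int main() {
    read("report.txt");                           // first access comes from the disk
    read("report.txt");                           // later accesses are cache hits
    std::cout << "Value: " << *read("report.txt") << "\n";
    return 0;
}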
Primary Memory
Primary Memory is of two types: RAM and ROM.
RAM (Random Access Memory): It is a volatile memory, which means it does not store data or instructions permanently. When you switch on the computer, the data and instructions from the hard disk are loaded into RAM. The CPU utilizes this data to perform the required tasks. As soon as you shut down the computer, the RAM loses all the data.
ROM (Read Only Memory): It is a non-volatile memory, which means it does not lose the data or programs that are written on it at the time of manufacture. So it is a permanent memory that contains all the important data and instructions needed to perform important tasks like the boot process.
Secondary Memory
The secondary storage devices that are built into the computer or connected to it are known as the secondary memory of the computer. It is also known as external memory or auxiliary storage.
1) Hard Disk: The hard disk, also known as a hard drive, is a rigid magnetic disc that stores data permanently, as it is a non-volatile storage device. It is located within a drive unit connected to the computer's motherboard and comprises one or more platters packed in an air-sealed casing. The data is written on the platters by moving a magnetic head over the platters as they
spin. The data stored on a computer's hard drive generally includes the operating system,
installed software, and the user's files and programs, including pictures, music, videos, text
documents, etc.
2) Solid-state Drive:
SSD (Solid State Drive) is also a non-volatile storage medium that is used to hold and access data. Unlike a hard drive, it does not have moving components, so it offers many advantages over a hard drive, such as faster access time, noiseless operation, less power consumption, and more.
As the cost of SSD has come down, it has become an ideal replacement for a standard hard
drive in desktop and laptop computers. It is also suitable for notebooks, and tablets that don't
require lots of storage.
3) Pen drive:
Pen drive is a compact secondary storage device. It is also known as a USB flash drive, thumb
drive or a jump drive. It connects to a computer via a USB port. It is commonly used to store
and transfer data between computers. For example, you can write a report using a computer
and then copy or transfer it in the pen drive. Later, you can connect this pen drive to a computer
to see or edit your report. You can also store your important documents and pictures, music,
videos in the pen drive and keep it at a safe place.
Pen drive does not have movable parts; it comprises an integrated circuit memory chip that
stores the data. This chip is housed inside a plastic or aluminium casing. The data storage
capacity of the pen drive generally ranges from 2 GB to 128 GB. Furthermore, it is a plug and
play device as you don't need additional drives, software, or hardware to use it.
4) SD Card:
SD Card stands for Secure Digital Card. It is most often used in portable and mobile devices
such as smartphones and digital cameras. You can remove it from your device and see the
things stored in it using a computer with a card reader.
There are many memory chips inside the SD card that store the data; it does not have moving
parts. SD cards are not all created equal; they may differ from each other in terms of speed, physical size, and capacity. For example, there are standard SD cards, mini SD cards, and micro SD cards.
5) CD (Compact Disc):
In the beginning, the CD was used for storing and playing sound recordings; later, it was used for various purposes such as storing documents, audio files, videos, and other data like software programs.
A standard CD is around 5 inches in diameter and 0.05 inches in thickness. It is made of a clear
polycarbonate plastic substrate, a reflective metallic layer, and a clear coating of acrylic plastic.
These thin circular layers are attached one on top of another as described below:
o A polycarbonate disc layer at the bottom has the data encoded by creating lands and pits.
o The polycarbonate disc layer is coated with a thin aluminium layer that reflects the
laser.
o The reflective aluminium layer is coated with a lacquer layer to prevent oxidation in
order to protect the below layers. It is generally spin coated directly on the top of the
reflective layer.
o The label print is applied on the lacquer layer, or artwork is screen printed on the top of
the disc on the lacquer layer by offset printing or screen printing.
How Does a CD Work?
The data or information is stored, recorded, or encoded on a CD digitally using a laser beam that etches tiny indentations or bumps on its surface. A bump is called a pit, and it represents the number 0. The space where a bump is not created is called land, and it represents the number 1. Thus, the data is encoded onto a compact disc by creating pits (0) and lands (1). CD players use laser technology to read the optically recorded data.
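As a toy illustration of this encoding, the C++ sketch below maps a made-up sequence of pits and lands to 0s and 1s; it leaves out the channel coding and error correction a real CD uses.

#include <iostream>
#include <string>

int main() {
    // 'P' stands for a pit (read as 0) and 'L' for a land (read as 1).
    std::string surface = "PLLPLPPL";

    std::string bits;
    for (char mark : surface)
        bits += (mark == 'P') ? '0' : '1';

    std::cout << "Surface: " << surface << "\n";
    std::cout << "Bits:    " << bits << "\n";   // prints 01101001
    return 0;
}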
6) DVD:
DVD is short for digital versatile disc or digital video disc. It is a type of optical media used
for storing optical data. Although it has the same size as a CD, its storage capacity is much
more than a CD. So, it is widely used for storing and viewing movies and to distribute software
programs as they are too large to fit on a CD. DVD was co-developed by Sony, Panasonic,
Philips, and Toshiba in 1995.
Types of DVDs:
DVDs can be divided into three main categories which are as follows:
o DVD-ROM (Read-Only): These types of DVDs come with media already recorded on them, such as movie DVDs. As the name suggests, data on these discs cannot be erased or added, so they are known as read-only or non-writable DVDs.
o DVD-R (Writable): It allows you to record or write information to the DVD. However,
you can write information only once as it becomes a read-only DVD once it is full.
o DVD-RW (Rewritable or Erasable): This type of disc can be erased, written, or recorded multiple times.
Memory Units
Memory units are used to measure and represent data. Some of the commonly used memory
units are:
1) Bit: The computer memory units start from bit. A bit is the smallest memory unit to measure
data stored in main memory and storage devices. A bit can have only one binary value out of
0 and 1.
2) Byte: It is the fundamental unit used to measure data. It contains, or is equal to, 8 bits. Thus a byte can represent 2^8 = 256 distinct values.
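The short C++ check below confirms that count; the variable names are only illustrative.

#include <cstdint>
#include <iostream>

int main() {
    // An unsigned 8-bit byte can hold the values 0..255, i.e. 2^8 = 256 values.
    int values = 1 << 8;                 // 2 raised to the power 8
    std::uint8_t maxByte = 0xFF;         // largest value an unsigned byte can hold

    std::cout << "Distinct values in a byte: " << values << "\n";     // 256
    std::cout << "Largest unsigned byte:     " << +maxByte << "\n";   // 255
    return 0;
}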
Flag register: In the 8085 microprocessor, the flag register is an 8-bit register holding five 1-bit flags that reflect the status of the most recent ALU result:
• Sign (S)
• Zero (Z)
• Auxiliary Carry (AC)
• Parity (P)
• Carry (C)
Instruction register and decoder
It is an 8-bit register. When an instruction is fetched from memory, it is stored in the instruction register. The instruction decoder decodes the information present in the instruction register.
Example: an 8085 assembly program to add the two 8-bit numbers stored at memory locations 3005H and 3006H and store the result at 3007H. For the inputs 14H and 89H, the result is:
14H + 89H = 9DH
The program code can be written like this −
LXI H, 3005H   ; HL points to 3005H
MOV A, M       ; get the first operand into A
INX H          ; HL points to 3006H
ADD M          ; add the second operand
INX H          ; HL points to 3007H
MOV M, A       ; store the result at 3007H
HLT            ; exit the program
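The addition and the resulting flag values can be checked with a short C++ sketch. The flag rules below follow the standard 8085 definitions, and the variable names are only illustrative.

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t first = 0x14, second = 0x89;        // the two operands
    unsigned wide = first + second;                  // keep the 9th bit for the carry
    std::uint8_t result = static_cast<std::uint8_t>(wide);

    bool carry  = wide > 0xFF;                                   // C: carry out of bit 7
    bool sign   = (result & 0x80) != 0;                          // S: bit 7 of the result
    bool zero   = result == 0;                                   // Z: result is zero
    bool auxc   = ((first & 0x0F) + (second & 0x0F)) > 0x0F;     // AC: carry out of bit 3
    bool parity = std::bitset<8>(result).count() % 2 == 0;       // P: set on even parity

    std::cout << std::hex << "Result: " << +result << "H\n";     // prints 9d
    std::cout << "S=" << sign << " Z=" << zero << " AC=" << auxc
              << " P=" << parity << " C=" << carry << "\n";      // S=1 Z=0 AC=0 P=0 C=0
    return 0;
}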
What is Pseudocode?
Pseudocode is an artificial and informal language that helps programmers in developing
algorithms. It is basically a “text-based” detail (algorithmic) design tool.
Algorithm and Program Example:
So here I have an example algorithm as well as a C++ program; the C++ program is not a complete program, it is just a function.
Algorithm.
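Written in the same pseudocode style as the examples further below (assuming the list and N, the number of elements, are given), the algorithm might look like this:
1. START
2. SUM ← 0
3. FOR EACH x IN LIST
4.     SUM ← SUM + x
5. AVERAGE ← SUM / N
6. RETURN AVERAGE
7. STOP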
The algorithm is for finding the average of a list of elements. That is, we have a collection of elements and we want to find out the average. First, we assign 0 to Sum. Then, for each element x in the list, Sum is assigned Sum + x, i.e. we add each value of x into the Sum variable. After that, Average is assigned Sum divided by the number of elements, and then we return Average. So, if you read the above algorithm, you can understand how to find the average of a list of elements: add all of them and divide by the number of elements. That's it. This is how we write our algorithm using pseudocode.
Examples-1:
Algorithm that compares two numbers and prints either the message identifying the greater
number or the message stating that both numbers are equal
1. START
2. PRINT “ENTER TWO NUMBERS”
3. INPUT A, B
4. IF A > B THEN PRINT “A IS GREATER THAN B”
5. IF B > A THEN PRINT “B IS GREATER THAN A”
6. IF A = B THEN PRINT “BOTH ARE EQUAL”
7. STOP
Examples-2: Algorithm to check whether a number given by the user is odd
or even
1. START
2. PRINT “ENTER THE NUMBER”
3. INPUT N
4. Q ← N/2 (Integer division)
5. R ← N - Q * 2
6. IF R = 0 THEN PRINT “N IS EVEN”
7. IF R != 0 THEN PRINT “N IS ODD”
8. STOP
Examples-3: Algorithm to find the greatest number among three numbers
1. START
2. PRINT “ENTER THREE NUMBERS”
3. INPUT A, B, C
4. IF A >= B AND A >= C
5. THEN PRINT A
6. ELSE IF B >= A AND B >= C
7. THEN PRINT B
8. ELSE
9. PRINT C
10. STOP
Examples-4: Algorithm uses a variable MAX to store the largest number
1. START
2. PRINT “ENTER THREE NUMBERS”
3. INPUT A, B, C
4. MAX ← A
5. IF B > MAX THEN MAX ← B
6. IF C > MAX THEN MAX ← C
7. PRINT MAX
8. STOP
Examples-5: Algorithm to find the largest of three numbers using nested conditions
1. START
2. PRINT “ENTER THREE NUMBERS”
3. INPUT A, B, C
4. IF A > B THEN
5. IF A > C THEN
6. PRINT A
7. ELSE
8. PRINT C
9. ELSE IF B > C THEN
10. PRINT B
11. ELSE
12. PRINT C
13. STOP
Example-6: Take three sides of a triangle as input and check whether the
triangle can be drawn or not.
1. START
2. PRINT “ENTER LENGTH OF THREE SIDES OF A TRIANGLE”
3. INPUT A, B, C
4. IF A + B > C AND B + C > A AND A + C > B
5. THEN
6. PRINT “TRIANGLE CAN BE DRAWN”
7. ELSE
8. PRINT “TRIANGLE CANNOT BE DRAWN”
9. STOP
Example-7: Algorithm to print a grade based on the percentage of marks obtained
1. START
2. PRINT “ENTER THE OBTAINED PERCENTAGE MARKS”
3. INPUT N
4. IF N > 0 AND N <= 50 THEN PRINT “F”
5. IF N > 50 AND N <= 60 THEN PRINT “C”
6. IF N > 60 AND N <= 70 THEN PRINT “B”
7. IF N > 70 AND N <= 80 THEN PRINT “A”
8. IF N > 80 AND N <= 90 THEN PRINT “E”
9. IF N > 90 AND N <= 100 THEN PRINT “O”
10. STOP
Example-8: Algorithm to print the numbers from 1 to 5 using a loop
1. START
2. C ← 1
3. WHILE C <= 5
4. BEGIN
5. PRINT C
6. C ← C + 1
7. END
8. STOP
Example-9: Algorithm to read N numbers and print their sum
1. START
2. PRINT “HOW MANY NUMBERS?”
3. INPUT N
4. S← 0
5. C ←1
6. PRINT “ENTER NUMBER”
7. INPUT A
8. S← S + A
9. C ←C + 1
10. IF C <= N THEN GOTO 6
11. PRINT S
12. STOP
Program.
Now, for the same task of finding the average of a list of elements, we can write the program using the C++ language. It's a function, not a complete program, just a function inside a program. If we don't use a semicolon to end a statement, it's an error, and if, instead of the assignment operator, we write a less-than sign or a hyphen, that is also an error. So, if you want to store a value, you must use the equals symbol, and that is called an assignment.
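A sketch of such a function in C++ could look like the following; the name findAverage and the use of a vector are just one possible choice.

#include <vector>

// Returns the average of the elements in the list (assumes the list is not empty).
double findAverage(const std::vector<double>& list) {
    double sum = 0;                     // assignment uses '=', and every statement ends with ';'
    for (double x : list)
        sum = sum + x;                  // add each element into sum
    return sum / list.size();           // divide by the number of elements
}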
So, it means you should follow the proper syntax of the language, because this is not for you: you are writing the program for the compiler to understand and convert into machine code. You will write a C++ program and it gets converted into machine code or machine language. So, you are actually talking to the compiler, and you should talk in such a way that the compiler can easily understand you.
If the compiler does not understand your program, then it cannot convert your program into machine code. So, you should follow the syntax perfectly. That is the reason you have to put in a little extra effort to learn programming.
What is a Flowchart?
A flowchart is used for showing the flow of control in a program and the sequence of steps
involved in a hierarchical manner. It is basically a diagrammatic representation of an algorithm,
workflow, or process.
So, if a program is very big, it is very difficult to figure out the flow of the program. Flowcharts are useful for understanding the program: instead of reading the program and understanding it, one can look at the flowchart and understand how the program works.
It is just like electrical wiring in a home: the wires or cables run through the walls, and if you have a plan, you know exactly where they run and where the important points are. Otherwise, if there is any problem with the wiring, you have to dig up the whole wall to find the problem; with a proper plan, you can understand it. So, before laying or pulling the wires, we make a plan. In the same way, before writing the program we make a flowchart, and based on the flowchart we write the program. This helps us to understand the program.
Use of Flowchart
Flowcharts were heavily used in the days of monolithic programming. Later, when the concept of procedural programming came into practice, the usage of flowcharts was somewhat reduced.
Steps in the flowchart:
Usually, when we are using a flowchart for a program, it consists of three steps:
1. Input
2. Process
3. Output
We will call it like this: first, it takes some input; then it processes it; then it gives the output. So, any procedure you take will have similar steps. For example, preparing a dish: the input is the ingredients, the process is the making of the dish, and the output is the finished dish. If you take a chemistry experiment, as usually done in laboratories, the input is the chemicals and the vessels or instruments you need, the process is what you do with them, and then it gets done successfully. So, every procedure has these three parts, and a program also looks like this.
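A minimal C++ program with the same three steps might look like this; the prompt text and the addition are only an example.

#include <iostream>

int main() {
    // 1. Input: read two numbers from the user
    double a, b;
    std::cout << "Enter two numbers: ";
    std::cin >> a >> b;

    // 2. Process: compute their sum
    double sum = a + b;

    // 3. Output: display the result
    std::cout << "Sum = " << sum << "\n";
    return 0;
}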
Elements of Flowchart:
Now let us look at the elements of a flowchart. The commonly used elements are described below.
Terminal: The oval symbol indicates Start, Stop and Halt in a program’s logic flow. A
pause/halt is generally used in programming logic under some error conditions. The terminal
is the first and last symbol in the flowchart.
Processing: A box represents arithmetic instructions. All arithmetic processes such as addition,
subtraction, multiplication, and division are indicated by the action/process symbol.
Now let us draw a few flowcharts and try to understand how flowcharts are used and how they are useful for writing programs.
Low-Level
Of all of the categories, it’s probably easiest to define what it means to be a low-level language.
Machine code is low level because it runs directly on the processor. Low-level languages are
appropriate for writing operating systems or firmware for micro-controllers. They can do just
about anything with a little bit of work, but obviously you wouldn’t want to write the next
major web framework in one of them (I can see it now, “Assembly on Rails”).
Characteristics
• Direct memory management
• Little-to-no abstraction from the hardware
• Register access
• Statements usually have an obvious correspondence with clock cycles
• Superb performance
C is actually a very interesting language in this category (even more so C++) because of how broad its range happens to be. C allows you direct access to registers and memory locations, but it also has a number of constructs which allow significant abstraction from the hardware itself. Really, C and C++ probably represent the broadest-spectrum languages in existence, which makes them quite interesting from a theoretical standpoint. In practice, both C and C++ are too low-level to do anything “enterprisey”.
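As a small illustration of this range, the C++ snippet below works with a raw memory address through a pointer and, right next to it, uses a higher-level container that hides all of that; it does not touch real hardware registers.

#include <iostream>
#include <vector>

int main() {
    // Low-level flavour: work with a memory address (pointer) directly.
    int value = 42;
    int* address = &value;              // the variable's memory address
    *address = 43;                      // modify memory through the pointer
    std::cout << "value = " << value << " at " << address << "\n";

    // Higher-level flavour: a container hides allocation and addressing.
    std::vector<int> numbers = {1, 2, 3};
    numbers.push_back(4);               // no manual memory management needed
    std::cout << "numbers has " << numbers.size() << " elements\n";
    return 0;
}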
Mid-Level
This is where things start getting vague. Most high-level languages are well defined, as are
low-level languages, but mid-level languages tend to be a bit difficult to box. I really define
the category by the size of application I would be willing to write using a given language. I
would have no problem writing and maintaining a large desktop application in a mid-level
language (such as Java), whereas to do so in a low-level language (like Assembly) would lead
to unending pain.
This is really the level at which virtual machines start to become commonplace. Java, Scala,
C# etc all use a virtual machine to provide an execution environment. Thus, many mid-level
languages don’t compile directly down to the metal (at least, not right away) but represent a
blurring between interpreted and compiled languages. Mid-level languages are almost always
defined in terms of low-level languages (e.g. the Java compiler is bootstrapped from C).
Characteristics
• High level abstractions such as objects (or functionals)
• Static typing
• Extremely commonplace (mid-level languages are by far the most widely used)
• Virtual machines
• Garbage collection
• Easy to reason about program flow
High-Level
High-level languages are really interesting if you think about it. They are essentially mid-level
languages which just take the concepts of abstraction and high-level constructs to the extreme.
For example, Java is mostly object-oriented, but it still relies on primitives which are
represented directly in memory. Ruby on the other hand is completely object-oriented. It has
no primitives (outside of the runtime implementation) and everything can be treated as an
object.
In short, high-level languages are the logical semantic evolution of mid-level languages. It
makes a lot of sense when you consider the philosophy of simplification and increase of
abstraction. After all, people were n times more productive switching from C to Java with all
of its abstractions. If that really was the case, then can’t we just add more and more layers of
abstraction to increase productivity exponentially?
High-level languages tend to be extremely dynamic. Runtime flow is changed on the fly
through the use of things like dynamic typing, open classes, etc. This sort of technique provides
a tremendous amount of flexibility in algorithm design. However, this sort of mucking about
with execution also tends to make the programs harder to reason about. It can be very difficult
to follow the flow of an algorithm written in Ruby. This “obfuscation of flow” is precisely why
I don’t think high-level languages like Ruby are suitable for large applications. That’s just my
opinion though.
Characteristics
• Interpreted
• Dynamic constructs (open classes, message-style methods, etc)
• Poor performance
• Concise code
• Flexible syntax (good for internal DSLs)
• Hybrid paradigm (object-oriented and functional)
• Fanatic community
A simple C program (hello.c):
#include <stdio.h>

int main() {
    printf("Hello, world!\n");
    return 0;
}
// Compile and link source file hello.c into an executable
> gcc hello.c
The default output executable is called "a.exe" (Windows) or "a.out" (Unixes and Mac OS X).
To run the program:
> a.exe (Windows) or ./a.out (Unixes and Mac OS X)