AOP Unit 1 Chapter 1
PROGRAMMING
Dr. Salini Suresh
Associate Professor, DSCASC
Unit 1: Chapter 1-Introduction
Introduction:
•The Role of Algorithms in Computing
•Algorithms as a technology
•Analyzing algorithms
•Designing algorithms
•Growth of Functions
•Asymptotic notation
•Standard notations and common functions.
The Role of Algorithms in Computing
Algorithms
•A tool for solving a well-specified computational problem
The Problem-solving Process
Problem → (Analysis) → Specification → (Design) → Algorithm → (Implementation) → Program → (Compilation) → Executable (solution)
From Algorithms to Programs
Problem → Algorithm (a sequence of instructions describing how to do a task or process) → C++ Program
Practical Examples
• Internet and Networks
The need to access large amounts of information in the shortest time.
Problems of finding the best routes for the data to travel.
Algorithms for searching this large amount of data to quickly find the pages on which
particular information resides.
• Electronic Commerce
The ability to keep information (credit card numbers, passwords, bank statements) private, safe, and secure.
This relies on algorithms involving encryption/decryption techniques.
Hard problems
• We can identify the efficiency of an algorithm from its speed (how long the algorithm takes to produce the result).
• For some problems, no efficient solution is known.
• These problems are called NP-complete problems.
NP (nondeterministic polynomial time) is a complexity class used to classify decision problems.
NP-complete problems are problems whose status is unknown: no polynomial-time algorithm has yet been discovered for any NP-complete problem.
• If we can show that a problem is NP-complete, we can instead spend our time developing an efficient algorithm that gives a good, but not the best possible, solution.
Components of an Algorithm
• Variables and values
• Instructions
• Sequences
• A series of instructions
• Procedures
• A named sequence of instructions
• we also use the following words to refer to a “Procedure” :
• Sub-routine
• Module
• Function
Components of an Algorithm Cont.
• Selections
• An instruction that decides which of two possible sequences is executed
• The decision is based on true/false condition
• Repetitions
• Also known as iteration or loop
• Documentation
• Records what the algorithm does
A Simple Algorithm
• INPUT: a sequence of n numbers
• T is an array of n elements
• T[1], T[2], …, T[n]
• OUTPUT: the smallest number among them
min = T[1]
for i = 2 to n do
{
    if T[i] < min
        min = T[i]
}
Output min
• Performance of this algorithm is a function of n
Algorithms as a Technology
Efficiency:
• Different algorithms that solve the same problem often differ noticeably in their efficiency
• These differences can be much more significant than differences due to hardware and software
Algorithm Efficiency
Consider two sorting algorithms:
• Insertion sort
• roughly takes time c1·n² to sort n items
• where c1 is a constant that does not depend on n
• i.e., it takes time roughly proportional to n²
• Merge sort
• roughly takes time c2·n·lg(n) to sort n items
• where c2 is also a constant that does not depend on n
• lg(n) stands for log₂(n)
• i.e., it takes time roughly proportional to n·lg(n)
• Insertion sort usually has a smaller constant factor than merge sort, so that c1 < c2
Algorithm Efficiency - Example
• Merge sort is faster than insertion sort for large input sizes
Consider now:
• A faster computer A running insertion sort against
• A slower computer B running merge sort
• Both must sort an array of one million numbers
Suppose
• Computer A executes one billion (10⁹) instructions per second
• Computer B executes ten million (10⁷) instructions per second
• So computer A is 100 times faster than computer B
Assume that
c1 = 2 and c2 = 50
• To sort one million numbers:
• Computer A takes 2 · (10⁶)² instructions ÷ 10⁹ instructions/second = 2000 seconds
• Computer B takes 50 · 10⁶ · lg(10⁶) instructions ÷ 10⁷ instructions/second ≈ 100 seconds
• By using an algorithm whose running time grows more slowly, Computer B runs 20 times faster than Computer A
• For ten million numbers
• Insertion sort takes ≈ 2.3 days
• Merge sort takes ≈ 20 minutes
Analyzing algorithms
Analysis of algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires.
• Analyzing an algorithm means predicting the resources that the
algorithm requires.
• Resources such as memory, communication bandwidth, or computer
hardware are of primary concern,
• but most often it is computational time that we want to measure.
Complexity of Algorithms
We can determine the efficiency of an algorithm by calculating its
performance.
Following are the two factors that help us to determine the efficiency of an
algorithm:
1. Total time required by an algorithm to execute
2. Total space required by an algorithm to execute.
Thus, the two main considerations required to analyze the algorithm are:
• 1. Time complexity
• 2. Space complexity
Algorithm analysis involves calculating the complexity of algorithms:
• Complexity of an algorithm is a measure of the amount of time and/or
space required by an algorithm for an input of a given size (n).
1.Time Complexity: It is a function that describes the time taken by
an algorithm to solve a problem.
The time complexity of an algorithm is the amount of computer
time it needs to run to completion. Big-O notation is used to express
the time complexity of an algorithm
2. Space Complexity: It is a function that describes the amount of
memory (space) required by an algorithm to run. A good algorithm
keeps its space requirement to a minimum.
• The complexity of an algorithm is the function f(n) which gives the running
time and/or storage space requirement of the algorithm in terms of the size
‘n’ of the input data.
• Mostly, the storage space required by an algorithm is simply a multiple of
the data size ‘n’.
• The function f(n), which gives the running time of an algorithm, depends not only on the size ‘n’ of the input data but also on the particular data.
Time Complexity - Example
• In step A, there is one independent statement, ‘x = x + 1’, and it is not within any loop.
• Hence, this statement will be executed only once. Thus, the frequency count of step A of the algorithm is 1.
• In step B, there are three statements, of which ‘x = x + 1’ is the important one. As the statement ‘x = x + 1’ is contained within the loop, it will be executed n times.
Thus, the frequency count of step B of the algorithm is n.
• In step C, the inner and outer loops each run n times, so the statement inside executes n² times. Thus, the frequency count of step C is n².
• Total = 1 + n + n²
Following are the commonly used asymptotic notations to calculate the running
time complexity of an algorithm.
• Ο Notation (Big Oh)
• Ω Notation (Omega)
• θ Notation (Theta)
Big Oh Notation, Ο
• The notation Ο(n) is the formal way to express the upper bound of an
algorithm's running time.
• Here n is the input size, f(n) and g(n) are two non-negative functions, and c is a positive constant.
• Formally, f(n) = O(g(n)) if there exist a positive constant c and a threshold n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
• Two functions f(n) and g(n) are said to have the same asymptotic growth provided their growths differ by at most a positive constant factor c > 0. In this case f(n) = O(g(n)) and g(n) = O(f(n)).
Some common bounds (smallest to largest)
The following are the most common kinds of growth used within big-O notation, from smallest to largest:
• O(1) - constant
• O(log n) - logarithmic
• O(n) - linear
• O(n log n) - linearithmic
• O(n²) - quadratic
• O(n³) - cubic
• O(2ⁿ) - exponential
Big Oh - graphical representation
A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize a given objective.
The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal.
• E.g.: selection sort is an example of a greedy algorithm
Designing Algorithms
3. Dynamic Programming is a bottom-up approach:
we solve all possible small problems and then combine their solutions to obtain solutions for bigger problems.
• This is particularly helpful when the number of repeated (overlapping) subproblems is exponentially large.
• Dynamic Programming is frequently applied to optimization problems.
• Insertion sort is an example of dynamic programming