AOP Unit 1 Chapter 1
MCA Chapter 1, Bangalore University

THE ART OF COMPUTER PROGRAMMING
Dr. Salini Suresh
Associate Professor, DSCASC
Unit 1: Chapter 1-Introduction
Introduction:
•The Role of Algorithms in Computing
•Algorithms as a technology
•Analyzing algorithms
•Designing algorithms
•Growth of Functions
•Asymptotic notation
•Standard notations and common functions.
The Role of Algorithms in Computing
Algorithms
•A tool for solving a well-specified computational problem

Input Algorithm Output

•Algorithms must be:


• Correct: For each input produce an appropriate output
• Efficient: run as quickly as possible, and use as little memory as possible
The Role of Algorithms in Computing
Algorithms
Algorithm: Any well-defined computational procedure that takes some
value, or set of values, as input and produces some value, or set of values,
as output.
Or
Algorithm: A method of solving a problem, using a sequence of
well-defined steps
Example: Sorting problem
Input: A sequence of n numbers ⟨a1, a2, …, an⟩
Output: A permutation (reordering) ⟨a′1, a′2, …, a′n⟩ of the input sequence such that a′1 ≤ a′2 ≤ … ≤ a′n
The Role of Algorithms in Computing
Correct and incorrect algorithms
• Algorithm is correct if, for every input instance, it ends with the correct
output.
• We say that a correct algorithm solves the given computational problem.

• An incorrect algorithm might not end at all on some input instances, or it


might end with an answer other than the desired one.

• We shall be concerned only with correct algorithms.

The Problem-solving Process

Analysis → Problem specification → Design → Algorithm → Implementation → Program → Compilation → Executable (solution)
From Algorithms to Programs
Problem → Algorithm (a sequence of instructions describing how to do a task or process) → C++ Program
Practical Examples
• Internet and Networks
The need to access large amounts of information in the shortest time.
Problems of finding the best routes for data to travel.
Algorithms for searching this large amount of data to quickly find the pages on which particular information resides.

• Electronic Commerce
The ability to keep information (credit card numbers, passwords, bank statements) private, safe, and secure.
Algorithms involve encryption/decryption techniques.

Hard problems
• We can identify the efficiency of an algorithm from its speed (how long the algorithm takes to produce its result).
• For some problems, no efficient solution is known.
• Such problems are called NP-complete problems.
NP (nondeterministic polynomial time) is a complexity class used to classify decision problems.
NP-complete problems are problems whose status is unknown: no polynomial-time algorithm has yet been discovered for any NP-complete problem.
• If we can show that a problem is NP-complete, we can instead spend our time developing an efficient algorithm that gives a good, but not necessarily the best possible, solution.

Components of an Algorithm
• Variables and values
• Instructions
• Sequences
• A series of instructions
• Procedures
• A named sequence of instructions
• We also use the following words to refer to a “procedure”:
• Sub-routine
• Module
• Function

Components of an Algorithm Cont.
• Selections
• An instruction that decides which of two possible sequences is executed
• The decision is based on true/false condition
• Repetitions
• Also known as iteration or loop
• Documentation
• Records what the algorithm does

A Simple Algorithm
• INPUT: a sequence of n numbers
• T is an array of n elements
• T[1], T[2], …, T[n]
• OUTPUT: the smallest number among them
min = T[1]
for i = 2 to n do
{
    if T[i] < min
        min = T[i]
}
Output min
• Performance of this algorithm is a function of n
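The pseudocode above can be sketched as runnable Python (the function name `find_min` is illustrative; Python lists are 0-based, so index 0 plays the role of T[1]):

```python
def find_min(T):
    """Return the smallest number in the sequence T."""
    min_val = T[0]                # min = T[1] in the slide's 1-based notation
    for i in range(1, len(T)):    # for i = 2 to n
        if T[i] < min_val:
            min_val = T[i]
    return min_val

print(find_min([7, 3, 9, 1, 5]))  # 1
```

The loop body runs n − 1 times, which is why the performance is a function of n.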

Algorithms as a Technology

Efficiency:
• Different algorithms that solve the same problem often differ noticeably in their efficiency
• These differences can be much more significant than differences due to hardware and software
Algorithms as a Technology
Algorithm Efficiency
Consider two sort algorithms
• Insertion sort
• roughly takes time equal to c₁·n² to sort n items
• where c₁ is a constant that does not depend on n
• i.e., it takes time roughly proportional to n²
• Merge sort
• roughly takes time equal to c₂·n·lg(n) to sort n items
• where c₂ is also a constant that does not depend on n
• lg(n) stands for log₂(n)
• i.e., it takes time roughly proportional to n·lg(n)
• Insertion sort usually has a smaller constant factor than merge sort, so that c₁ < c₂
Algorithms as a Technology
Algorithm Efficiency-Example
• Merge sort is faster than insertion sort for large input sizes
Consider now:
• A faster computer A running insertion sort against
• A slower computer B running merge sort
• Both must sort an array of one million numbers
Suppose
• Computer A executes one billion (10⁹) instructions per second
• Computer B executes ten million (10⁷) instructions per second
• So computer A is 100 times faster than computer B
Assume that
c1 = 2 and c2 = 50
Algorithms as a Technology
Algorithm Efficiency-Example
• To sort one million numbers
• Computer A takes
(2 · (10⁶)² instructions) / (10⁹ instructions/second) = 2000 seconds
• Computer B takes
(50 · 10⁶ · lg(10⁶) instructions) / (10⁷ instructions/second) ≈ 100 seconds

• By using algorithm whose running time grows more slowly, Computer B runs 20
times faster than Computer A
• For ten million numbers
• Insertion sort takes ≈ 2.3 days
• Merge sort takes ≈ 20 minutes
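The arithmetic above can be reproduced with a short sketch (the function names are illustrative; the constants c1 = 2, c2 = 50 and the machine speeds follow the slide's assumptions):

```python
import math

def insertion_time(n, c1=2, speed=1e9):   # computer A: 10^9 instructions/second
    """Seconds to sort n items with insertion sort's c1*n^2 cost model."""
    return c1 * n**2 / speed

def merge_time(n, c2=50, speed=1e7):      # computer B: 10^7 instructions/second
    """Seconds to sort n items with merge sort's c2*n*lg(n) cost model."""
    return c2 * n * math.log2(n) / speed

print(insertion_time(10**6))              # 2000.0 seconds
print(merge_time(10**6))                  # ~99.7 seconds
print(insertion_time(10**7) / 86400)      # ~2.3 days
print(merge_time(10**7) / 60)             # ~19.4 minutes
```

Note that the fast machine loses once n is large: the growth rate of the cost function dominates the raw instruction speed.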
Analyzing algorithms
Analysis of Algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires.
• Analyzing an algorithm means predicting the resources that the
algorithm requires.
• Resources such as memory, communication bandwidth, or computer
hardware are of primary concern,
• but most often it is computational time that we want to measure.
Analyzing algorithms
Complexity of Algorithms
We can determine the efficiency of an algorithm by calculating its
performance.
Following are the two factors that help us to determine the efficiency of an
algorithm:
1. Total time required by an algorithm to execute
2. Total space required by an algorithm to execute.
Thus, the two main considerations required to analyze the algorithm are:
• 1. Time complexity
• 2. Space complexity
Analyzing algorithms
Complexity of Algorithms
Algorithm analysis involves calculating the complexity of algorithms,
• Complexity of an algorithm is a measure of the amount of time and/or
space required by an algorithm for an input of a given size (n).
1.Time Complexity: It is a function that describes the time taken by
an algorithm to solve a problem.
The time complexity of an algorithm is the amount of computer
time it needs to run to completion. Big-O notation is used to express
the time complexity of an algorithm
2. Space Complexity: It is a function that describes the amount of memory or space required by an algorithm to run. A good algorithm keeps its space requirement to a minimum.
Analyzing algorithms
Complexity of Algorithms
• The complexity of an algorithm is the function f(n) which gives the running
time and/or storage space requirement of the algorithm in terms of the size
‘n’ of the input data.
• Mostly, the storage space required by an algorithm is simply a multiple of
the data size ‘n’.

• The function f(n), which gives the running time of an algorithm, depends not only on the size ‘n’ of the input data but also on the particular data.
Analyzing algorithms
Time Complexity –Example
• In step A, there is one independent statement ‘x=
x+1’ and it is not within any loop.
• Hence, this statement will be executed only once.
Thus, the frequency count of step A of the algorithm
is 1.
• In step B, there are three statements out of which ‘x
= x+1’ is an important statement. As the statement ‘x
= x+1’ is contained within the loop, the statement
will be executed n number of times.
Thus, the frequency count of step B in algorithm
is n.
• In step C, the inner and outer loops each run n times. Thus, the frequency count of step C is n².
• Total = 1 + n + n²

During the analysis of an algorithm, the focus is on determining the statements that contribute the greatest frequency count.
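The three frequency counts can be sketched directly (a minimal illustration; n = 10 is an arbitrary sample size, and the counters are added only to make the counts visible):

```python
n = 10
x = 0

count_A = 0
x = x + 1; count_A += 1          # step A: a single statement, executed once

count_B = 0
for i in range(n):               # step B: the loop body runs n times
    x = x + 1; count_B += 1

count_C = 0
for i in range(n):               # step C: nested loops run n * n times
    for j in range(n):
        x = x + 1; count_C += 1

print(count_A, count_B, count_C)               # 1 10 100
print(count_A + count_B + count_C)             # 111, i.e. 1 + n + n^2
```

For large n the n² term dominates, which is why the analysis focuses on it.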
Analyzing algorithms
BEST, WORST, AND AVERAGE-CASE COMPLEXITY
For a given input:
• Best case time complexity: the algorithm takes the minimum amount of time to solve the problem. In other words, the algorithm runs for a short time.
Eg: Bubble sort has a best case time complexity of n.
• Worst case time complexity: the algorithm takes the maximum amount of time to solve the problem. In other words, the algorithm runs for a long time.
Eg: Quicksort has a worst case time complexity of n².
• Average case time complexity: specifies the behavior of the algorithm on a typical or random input.
Eg: Quicksort has an average case time complexity of n·log(n).
Algorithm Analysis
Types of Analysis
The efficiency of some algorithms may vary for inputs of the same size.
For such algorithms, we need to differentiate between the worst case, average case and best case efficiencies.
Best Case Analysis
If an algorithm takes the least amount of time to execute for a specific set of inputs, then that is called the best case time complexity.
The best case efficiency of an algorithm is the efficiency for the best case input of size n.
• Because of this input, the algorithm runs the fastest among all the possible inputs of the same size.
• Best case does not mean the smallest input. It means the input of size n for which the
algorithm runs the fastest.
To analyze the best case efficiency, we have to first determine the kind of inputs for which the count
C(n) will be the smallest among all possible inputs of size n.

Eg: In case of sequential search, the best case for lists of size n is when the first element equals the search key. Then,
Cbest(n) = 1
Algorithm Analysis
Types of Analysis
Worst Case Analysis
• If an algorithm takes the maximum amount of time to execute for a specific set of inputs, then that is called the worst case time complexity.
• The worst case efficiency of an algorithm is the efficiency for the worst case input of size n.
• For this input of size n, the algorithm runs the longest among all possible inputs of that size.
Eg: In sequential search, if the search key is present at the nth position of the list, then the number of basic operations and the time required to execute the algorithm are greatest.
Thus, it gives the worst case time complexity, represented as:
Cworst(n) = n
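Sequential search and its best/worst case comparison counts can be sketched as (a minimal illustration; returning the comparison count alongside the index is just for demonstration):

```python
def sequential_search(lst, key):
    """Return (index of key, number of comparisons made), or (-1, n) if absent."""
    comparisons = 0
    for i, value in enumerate(lst):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

data = [4, 8, 15, 16, 23]
print(sequential_search(data, 4))    # best case: key in first position -> 1 comparison
print(sequential_search(data, 23))   # worst case: key in nth position  -> n = 5 comparisons
print(sequential_search(data, 99))   # unsuccessful search also makes n comparisons
```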
Algorithm Analysis
Types of Analysis
Average Case Analysis
• Average case time complexity is the running time of an algorithm averaged over all possible inputs of a given size.
• Average case analysis provides necessary information about an algorithm’s behavior on a typical or random input.
Average Case Analysis – example
In sequential search,
• the probability of a successful search is equal to t, where 0 ≤ t ≤ 1,
• and the probability of the first match occurring in the ith position of the list is the same for all values of i.
In case of a successful search, the probability of the first match occurring in the ith position of the list is t/n for all values of i, and the number of comparisons made by the algorithm is then i.
In case of an unsuccessful search, the probability is (1 − t) and the number of comparisons is n.
Combining the two cases gives the average count:
Cavg(n) = t · (n + 1)/2 + n · (1 − t)
Asymptotic analysis

• Asymptotic analysis refers to estimating the running time of an operation as a function of the input size.
• This approach is based on the asymptotic complexity measure.
• It does not count the exact number of steps of a program, but describes how that number grows with the size of the program's input.
Asymptotic Notations
A problem may have various algorithmic solutions.
In order to choose the best algorithm for a particular task, you must be able to judge the time each solution takes to run, and choose the better among them.
To select the best algorithm, it is necessary to check the efficiency of each
algorithm.
The efficiency of each algorithm can be checked by computing its time
complexity.
The asymptotic notations help to represent the time complexity in a shorthand
way.
These notations generally represent the fastest possible, slowest possible, or average possible running time.
Asymptotic Notations

Following are the commonly used asymptotic notations to calculate the running
time complexity of an algorithm.
• Ο Notation (Big Oh)
• Ω Notation (Omega)
• θ Notation (Theta)
Big Oh Notation, Ο
• The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.

• It measures the worst case time complexity; that is, this notation helps in calculating the maximum amount of time taken by an algorithm to solve a problem.

• Big O notation expresses the runtime in terms of how quickly it grows relative to the input, as the input gets larger.
Big Oh Notation-Common bounds-eg
• O(n), specifically, describes an algorithm with linear time complexity. That is, when passed 5 inputs, it will take roughly 5 times as long as when passed 1 input.
• O(n²) means that, when passed 5 inputs, the algorithm will take 5² (25) times longer than when passed a single input.
Big Oh Notation, Ο
Big-O is defined as: f(n) ≤ c ∗ g(n) for all n ≥ n₀, where
• n is the input size and f(n) and g(n) are two non-negative functions,
• f(n) is the function that characterizes the actual running time of the algorithm,
• g(n) is a function that characterizes an upper bound on f(n),
• c is a positive constant and n₀ is a non-negative number.

Big-O can also be denoted as f(n) = O(g(n)),
• where f(n) and g(n) are two non-negative functions and f(n) ≤ c·g(n),
• i.e. g(n), multiplied by some constant c, bounds f(n) from above.
Omega Notation
‘Ω’ is the representation for Omega notation.
Omega describes the manner in which an algorithm performs in the best case
time complexity.
This notation provides the minimum amount of time taken by an algorithm to
compute a problem.
omega gives the "lower bound" of the algorithm's run-time.
Omega is defined as: f(n) ≥ c ∗ g(n) for all n ≥ n₀,
where n is the input size and f(n) and g(n) are two non-negative functions.
Omega can also be denoted as f(n) = Ω(g(n)).
Theta θ Notation
Theta notation is used when the upper bound and lower bound of an algorithm
are in the same order of magnitude.
Theta can be defined as: c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n) for all n > n₀,

• where n is the input size, f(n) and g(n) are two non-negative functions, and c1 and c2 are positive constants.

Theta can also be denoted as f(n) = θ(g(n)).


The Growth of Functions and Big-O Notation
• By asymptotic growth we mean the growth of the function as the input
variable n gets arbitrarily large

• Two functions f(n) and g(n) are said to have the same asymptotic growth provided their growths differ by at most some positive constant factor c > 0.
• In this case f(n) = O(g(n)) and g(n) = O(f(n)).
Some common bounds(smallest to largest)
The most common kinds of growth used within big-O notation, from smallest to largest, are:
O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n) linearithmic, O(n²) quadratic, O(n³) cubic, O(2ⁿ) exponential, and O(n!) factorial.
Big Oh –graphical representation

f(n) ≤ c·g(n) for all n ≥ n₀

This implies that f(n) does not grow faster than g(n); that is, g(n) is an upper bound on the function f(n).
Example
f(n) = 2n + 3, g(n) = n
Now, we have to find: is f(n) = O(g(n))?
To check f(n) = O(g(n)), it must satisfy the condition:
f(n) ≤ c·g(n)
First, we replace f(n) by 2n + 3 and g(n) by n:
2n + 3 ≤ c·n
Let's assume c = 5. For n = 1:
2·1 + 3 ≤ 5·1, i.e. 5 ≤ 5, so the condition is true.
For n = 2:
2·2 + 3 ≤ 5·2, i.e. 7 ≤ 10, so the condition is true.
In fact, for any value of n ≥ 1 the condition 2n + 3 ≤ 5·n is satisfied with c = 5,
so f(n) = O(g(n)); we can say that f(n) grows linearly.
Therefore, it concludes that c·g(n) is an upper bound of f(n).
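The inequality can also be checked numerically; a minimal sketch (the upper limit 1000 is just an illustrative sample of values of n, not a proof):

```python
# Check the slide's example: f(n) = 2n + 3 <= 5n for all n >= 1,
# which witnesses f(n) = O(n) with c = 5 and n0 = 1.
f = lambda n: 2 * n + 3
g = lambda n: n
c, n0 = 5, 1

print(all(f(n) <= c * g(n) for n in range(n0, 1000)))  # True
print(f(1) <= c * g(1), f(2) <= c * g(2))              # True True
```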
Omega Notation – graphical representation
Theta θ Notation – graphical representation
Standard notations and common functions
Monotonicity
• A function's increasing or decreasing tendency is called monotonicity on its
domain.
• A function is called monotonically increasing if, for all x and y such that x ≤ y, f(x) ≤ f(y).
• A function is called monotonically decreasing if, for all x and y such that x ≤ y, f(x) ≥ f(y).
Monotonicity
(Figures: a monotonically decreasing function and a monotonically increasing function.)
Floor and Ceiling Functions
• Floor function is represented as floor(x).
• Floor function gives the largest integer less than or equal to x.
Eg:
floor(1.01)=1
floor(0)=0
floor(2.9)=2
floor(-3)=-3
floor(-1.1)=-2
Floor and Ceiling Functions
• Ceiling function is represented as ceiling(x).

• It gives the smallest integer value greater than or equal to x.


Eg:
ceiling (1.5)=2
ceiling(0)=0
ceiling(2)=2
ceiling(-3)=-3
ceiling(-1.1)=-1
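The floor and ceiling examples above can be checked with Python's standard `math` module:

```python
import math

# Floor: largest integer less than or equal to x.
print(math.floor(1.01), math.floor(0), math.floor(2.9),
      math.floor(-3), math.floor(-1.1))   # 1 0 2 -3 -2

# Ceiling: smallest integer greater than or equal to x.
print(math.ceil(1.5), math.ceil(0), math.ceil(2),
      math.ceil(-3), math.ceil(-1.1))     # 2 0 2 -3 -1
```

Note how negative non-integers behave: floor(-1.1) = -2 and ceil(-1.1) = -1, matching the definitions rather than simple truncation.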
Summation Symbol
Summation symbol is Σ.
Summation is the operation of combining a sequence of numbers
using addition.
The result is the sum or total of all the numbers.
Apart from numbers, other types of values such as, vectors, matrices,
polynomials, and elements of any additive group can also be added
using summation symbol.
Summation – example
Consider a sequence x1, x2, x3, …, x10.
The simple addition of this sequence is
x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10.
Using mathematical notation we can shorten the addition; the expression becomes
x1 + x2 + x3 + … + x10.
We can also represent the expression using the Greek letter Σ.
• In the notation Σ (i = a to b), a is the first index and b is the last index.
• The variable is the quantity that appears in all terms.
• In the expression x1 + x2 + … + x10, 1 is the first index, 10 is the last index, and x is the variable.
• If we use i as the index variable, then the expression is written Σ (i = 1 to 10) xi.
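The summation above can be expressed in code (the values of x1…x10 are illustrative; Python indexes from 0, so `x[i]` stands for x(i+1)):

```python
x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]        # illustrative values for x1 .. x10

# Corresponds to the summation over i = 1 to 10 of xi.
total = sum(x[i] for i in range(len(x)))

print(total)  # 39
```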
Exponent and Logarithm
An exponential function has the form f(x) = aˣ + B,
where a is the base, x is the exponent, and B is any expression.
If a is positive, the function continuously increases in value.
As x increases, the slope of the function also increases: in the graph, as x increases, y increases, and the slope of the graph increases as well.
(Figure: graph of an exponential function.)
logarithms
A logarithm is an exponent.
The logarithmic function is defined as f(x)= logb x.
Here, the base of the logarithm is b.
Factorial
The symbol of the factorial function is ‘!’.
The factorial function multiplies a series of natural numbers that are in
descending order.
The factorial of a positive integer n which is denoted by n! represents
the product of all the positive integers less than or equal to n.
n!=n*(n-1)*(n-2)……2*1
Eg:
5!=5*4*3*2*1=120
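The definition n! = n·(n−1)·…·2·1 translates directly into a loop (a minimal sketch; the function name is illustrative):

```python
def factorial(n):
    """Return n! for a non-negative integer n, with 0! = 1! = 1."""
    result = 1
    for k in range(2, n + 1):   # multiply 2 * 3 * ... * n
        result *= k
    return result

print(factorial(5))  # 120
```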
Fibonacci Numbers
In the Fibonacci sequence, after the first two numbers, each subsequent number in the series is equal to the sum of the previous two numbers.
Fibonacci numbers are the elements of the Fibonacci sequence:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, …
Sometimes this sequence is given as 0, 1, 1, 2, 3, 5, … There are also other Fibonacci-like sequences, which start with other numbers, e.g.:
3, 10, 13, 23, 36, 59, …
In mathematical terms, the sequence Fn of Fibonacci numbers is defined by the recurrence:
Fn = Fn-1 + Fn-2 for n > 2, with F1 = F2 = 1
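The recurrence Fn = Fn-1 + Fn-2 can be sketched iteratively, starting from 1, 1 as in the sequence listed above (the function name is illustrative):

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])   # Fn = Fn-1 + Fn-2
    return seq[:count]

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Starting the list with other seed values (e.g. 3, 10) produces the Fibonacci-like sequences mentioned above.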
Designing Algorithms
1. Divide and Conquer Approach:
It is a top-down approach.
The algorithms which follow the divide & conquer techniques involve
three steps:
• Divide the original problem into a set of subproblems.
• Solve every subproblem individually, recursively.
• Combine the solution of the subproblems (top level) into a solution of
the whole original problem.
Standard algorithms that follow the Divide and Conquer approach
• Binary Search is a searching algorithm.
• In each step, the algorithm compares the input element x with the value of the middle
element in array.
• If the values match, return the index of the middle.
• Otherwise, if x is less than the middle element, then the algorithm recurs for left side of
the middle element, else recurs for the right side of the middle element.
• Quicksort is a sorting algorithm.
• The algorithm picks a pivot element,
• rearranges the array elements in such a way that all elements smaller than the picked
pivot element move to left side of pivot,
• and all greater elements move to right side.
• Finally, the algorithm recursively sorts the subarrays on left and right of pivot element.
• Merge Sort is also a sorting algorithm.
• The algorithm divides the array in two halves,
• recursively sorts them and finally merges the two sorted halves.
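Binary search, the first divide-and-conquer example above, can be sketched recursively (a minimal illustration assuming a sorted input array; the function name and the -1 "not found" convention are illustrative):

```python
def binary_search(arr, x, lo=0, hi=None):
    """Return the index of x in sorted list arr, or -1 if x is absent."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                                   # search range empty: x not present
    mid = (lo + hi) // 2
    if arr[mid] == x:
        return mid                                  # values match: return middle index
    if x < arr[mid]:
        return binary_search(arr, x, lo, mid - 1)   # recurse on left half
    return binary_search(arr, x, mid + 1, hi)       # recurse on right half

sorted_data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(sorted_data, 23))   # 5
print(binary_search(sorted_data, 7))    # -1
```

Each step discards half of the remaining range, which is what gives binary search its logarithmic running time.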
Designing Algorithms
2. Greedy Technique:
The greedy method is used to solve optimization problems.
An optimization problem is one in which, for a set of input values, some quantity (known as the objective) is required to be either maximized or minimized, possibly subject to some constraints or conditions.

A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize the given objective.
The greedy approach doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal one.
• Eg: selection sort is an example of a greedy algorithm
Designing Algorithms
3. Dynamic Programming is a bottom-up approach:
we solve all possible small problems and then combine their solutions to obtain solutions for bigger problems.
• This is particularly helpful when the number of repeating (overlapping) subproblems is exponentially large.
• Dynamic programming is frequently related to optimization problems.
• Insertion sort is an example of dynamic programming
