Chapter 1
1-2
What is an algorithm?
An algorithm is a sequence of unambiguous instructions
for solving a problem, i.e., for obtaining a required
output for any legitimate input in a finite amount of
time.
1-3
What is an algorithm?
1. Finiteness
❂ terminates after a finite number of steps
2. Definiteness
❂ rigorously and unambiguously specified
3. Input
❂ valid inputs are clearly specified
4. Output
❂ can be proved to produce the correct output given a valid
input
5. Effectiveness
❂ steps are sufficiently simple and basic
1-4
Historical Perspective
• Euclid’s algorithm for finding the greatest
common divisor
– third century BC
1-5
Euclid’s Algorithm
Problem: Find gcd(m, n), the greatest common divisor
of two nonnegative integers m and n, not both zero
Examples: gcd(60,24) = 12, gcd(60,0) = 60,
gcd(0,0) = ?
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
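A minimal Java sketch of this pseudocode (the class and method names are my own choice):

// Euclid's algorithm as in the pseudocode above; assumes m, n >= 0, not both zero.
public class Euclid {
    static int gcd(int m, int n) {
        while (n != 0) {
            int r = m % n;   // r <- m mod n
            m = n;
            n = r;
        }
        return m;
    }
    public static void main(String[] args) {
        System.out.println(gcd(60, 24));  // prints 12
        System.out.println(gcd(60, 0));   // prints 60
    }
}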
1-7
Other methods for computing gcd(m,n)
Consecutive integer checking algorithm
Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to
Step 3; otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return
t and stop; otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2
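A Java sketch of these steps, assuming both m and n are positive (the steps above do not handle a zero input):

// Consecutive integer checking: try t = min(m, n), min(m, n) - 1, ...
// until a common divisor of m and n is found.
public class GcdByChecking {
    static int gcd(int m, int n) {
        int t = Math.min(m, n);                   // Step 1
        while (!(m % t == 0 && n % t == 0)) {     // Steps 2 and 3
            t--;                                  // Step 4
        }
        return t;
    }
    public static void main(String[] args) {
        System.out.println(gcd(60, 24));  // prints 12
    }
}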
1-8
Consecutive integer checking algorithm
Trace: find gcd(60, 24)
Step 1: t = 24
Step 2: 60 mod 24 = 12, which is not 0, so go to Step 4
Step 4: t = 23; 60 mod 23 ≠ 0, so t = 22; 60 mod 22 ≠ 0, so t = 21;
60 mod 21 ≠ 0, so t = 20; 60 mod 20 = 0, so go to Step 3
Step 3: 24 mod 20 = 4, which is not 0, so go to Step 4
and the steps continue in this way until t = 12, which divides both 60 and 24,
so 12 is returned as gcd(60, 24).
1-9
Other methods for gcd(m,n) [cont.]
Middle-school procedure
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common
prime factors and return it as gcd(m,n)
Is this an algorithm?
1-10
Sieve of Eratosthenes
Input: Integer n ≥ 2
Output: List of primes less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to ⌊√n⌋ do
    if A[p] ≠ 0 // p hasn't been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0 // mark element as eliminated
            j ← j + p
Example (n = 20): starting from 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20, the numbers left unmarked are the primes 2, 3, 5, 7, 11, 13, 17, 19.
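A Java sketch of the sieve, collecting the surviving values into a list at the end as the stated output requires:

import java.util.ArrayList;
import java.util.List;

// Sieve of Eratosthenes as in the pseudocode above.
public class Sieve {
    // Returns all primes less than or equal to n (n >= 2).
    static List<Integer> primesUpTo(int n) {
        int[] a = new int[n + 1];
        for (int p = 2; p <= n; p++) a[p] = p;
        for (int p = 2; p * p <= n; p++) {           // p <= floor(sqrt(n))
            if (a[p] != 0) {                         // p not yet eliminated
                for (int j = p * p; j <= n; j += p) {
                    a[j] = 0;                        // mark multiples as eliminated
                }
            }
        }
        List<Integer> primes = new ArrayList<>();
        for (int p = 2; p <= n; p++) if (a[p] != 0) primes.add(p);
        return primes;
    }
    public static void main(String[] args) {
        System.out.println(primesUpTo(20));  // [2, 3, 5, 7, 11, 13, 17, 19]
    }
}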
1-11
Example 1
Design an algorithm to find all the common
elements in two sorted lists of numbers. E.g. for
the lists 2, 4, 4, 4, 7 and 1, 2, 2, 4, 4, 6, 7 the
output should be 2, 4, 4, 7
1-12
Example 1
int sizem = arr1.size();
int sizen = arr2.size();
int m = 0, n = 0, index = 0;
// the common elements can never outnumber the shorter list
int[] arr3 = new int[Math.min(sizem, sizen)];
// merge-style scan of the two sorted lists
while (m < sizem && n < sizen) {
    int element1 = arr1.get(m), element2 = arr2.get(n);
    if (element1 < element2) m++;                      // advance the list with the smaller value
    else if (element1 > element2) n++;
    else { arr3[index++] = element1; m++; n++; }       // common element found
}
1-14
Exercise 1
a) gcd(31415, 14142) = gcd(14142, 3131) = gcd(3131, 1618) = gcd(1618, 1513) =
gcd(1513, 105) = gcd(105, 43) = gcd(43, 19) = gcd(19, 5) = gcd(5, 4) = gcd(4, 1)
= gcd(1, 0) = 1.
Hint for b): The total number of moves always stays the same: it is equal to
m/gcd(m, n), where m is the maximum of the initial numbers (this count includes
the two integers of the initial pair).
1-16
Exercise 2
a) Here is a nonrecursive version:
Algorithm Euclid2 (m, n)
//Computes gcd(m, n) by Euclid’s algorithm based on subtractions
//Input: Two nonnegative integers m and n not both equal to 0
//Output: The greatest common divisor of m and n
while n ≠ 0 do
    if m < n swap(m, n)
    m ← m − n
return m
b) It is not too difficult to prove that the integers that can be written on the board are
the integers generated by the subtraction version of Euclid's algorithm and only them.
Although the order in which they appear on the board may vary, their total number
always stays the same: it is equal to m/gcd(m, n), where m is the maximum of the
initial numbers (this count includes the two integers of the initial pair). Hence, the
total number of possible moves is m/gcd(m, n) − 2. Consequently, if m/gcd(m, n) is
odd, one should choose to go first; if it is even, one should choose to go second.
Basic Issues Related to Algorithms
• How to design algorithms
• Proving correctness
• Optimality
1-18
1.2 Fundamentals of Algorithmic Problem Solving
1-19
Understand the problem
Design an algorithm
Prove correctness
1-20
What does it mean to understand the
problem?
• What are the problem objects?
• What are the operations applied to the objects?
1-21
Design an algorithm
• Build a computational model of the
solving process
Prove correctness
• Correct output for every legitimate
input in finite time
• Based on correct math formula
• By induction
1-22
Analyze the algorithm
Efficiency: time and space
Simplicity
Generality: range of inputs, special cases
Optimality:
no other algorithm can do better
Coding
How are the objects and operations in the
algorithm represented in the chosen
programming language?
1-23
Fundamentals of Algorithmic Problem Solving, continued: Algorithm
design techniques/strategies
1-24
Brute force
• A brute force algorithm simply tries all possibilities
until a satisfactory solution is found
• Such an algorithm can be:
    – Optimizing: find the best solution. This may require finding all
    solutions, or, if a value for the best solution is known, it may stop
    when any best solution is found
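For instance (an illustrative sketch, not from the slides), a brute-force check of whether some pair of array elements sums to a given target simply tries every pair:

// Brute force sketch: try all pairs of positions until a satisfactory one is found.
public class BruteForcePairSum {
    static boolean hasPairWithSum(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                if (a[i] + a[j] == target) return true;  // satisfactory solution found
            }
        }
        return false;  // all possibilities exhausted
    }
    public static void main(String[] args) {
        System.out.println(hasPairWithSum(new int[]{3, 8, 1, 5}, 9));  // true (8 + 1)
    }
}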
1-25
Divide and conquer
• A divide and conquer algorithm consists of two
parts:
– Divide the problem into smaller subproblems of the same
type, and solve these subproblems recursively
– Combine the solutions to the subproblems into a solution
to the original problem
• Traditionally, an algorithm is only called “divide and
conquer” if it contains at least two recursive calls
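For instance (an illustrative sketch, not from the slides), the maximum of an array can be found by splitting it into two halves, solving each half recursively, and combining the two answers; note the two recursive calls:

// Divide-and-conquer sketch: maximum of a[lo..hi] via two recursive calls.
public class DivideAndConquerMax {
    static int max(int[] a, int lo, int hi) {
        if (lo == hi) return a[lo];              // base case: one element
        int mid = (lo + hi) / 2;                 // divide into two halves
        int left = max(a, lo, mid);              // solve left subproblem
        int right = max(a, mid + 1, hi);         // solve right subproblem
        return Math.max(left, right);            // combine the two solutions
    }
    public static void main(String[] args) {
        int[] a = {4, 9, 2, 7};
        System.out.println(max(a, 0, a.length - 1));  // 9
    }
}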
1-26
Decrease and conquer, Transform and conquer
• Decrease and conquer involves reducing the problem to one subproblem.
• Transform and conquer involves changing the problem into something that is
much easier to solve, or into an instance to which you know the solution.
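For instance (an illustrative sketch, not from the slides), computing a^n by decrease-and-conquer reduces the problem to the single smaller instance a^(n−1):

// Decrease-and-conquer sketch: a^n is reduced to the single smaller
// subproblem a^(n-1) and combined with one extra multiplication.
public class DecreasePower {
    static long power(long a, int n) {      // assumes n >= 0
        if (n == 0) return 1;               // base case
        return power(a, n - 1) * a;         // one subproblem of size n - 1
    }
    public static void main(String[] args) {
        System.out.println(power(2, 10));   // 1024
    }
}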
1-27
Greedy algorithm
• A greedy algorithm works in phases. At each phase:
– You take the best you can get right now, without regard for
future consequences
– You hope that by choosing a local optimum at each step,
you will end up at a global optimum
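For instance (an illustrative sketch, not from the slides), greedy coin change takes the largest coin that fits at every phase; for coin systems such as {25, 10, 5, 1} this local choice happens to yield a global optimum, though for arbitrary coin systems it may not:

import java.util.ArrayList;
import java.util.List;

// Greedy sketch: at each phase take the largest coin that still fits,
// never reconsidering earlier choices.
public class GreedyChange {
    // coins must be given in decreasing order and include 1, so change always completes
    static List<Integer> makeChange(int amount, int[] coins) {
        List<Integer> used = new ArrayList<>();
        for (int c : coins) {
            while (amount >= c) {    // best (largest) coin available right now
                used.add(c);
                amount -= c;
            }
        }
        return used;
    }
    public static void main(String[] args) {
        System.out.println(makeChange(63, new int[]{25, 10, 5, 1}));  // [25, 25, 10, 1, 1, 1]
    }
}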
1-28
Dynamic programming
• Dynamic programming algorithms are used for
optimization (for example, finding the shortest
path between two points, or the fastest way to
multiply many matrices). A dynamic
programming algorithm will examine the
previously solved subproblems and will
combine their solutions to give the best
solution for the given problem.
1-29
Dynamic programming - lcs
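A minimal sketch of the standard bottom-up dynamic program for the length of the longest common subsequence (LCS) named in the title; the table L stores the answers to previously solved subproblems:

// Dynamic programming sketch: length of the longest common subsequence.
// L[i][j] holds the answer for the subproblem (first i chars of x, first j chars of y);
// each entry is obtained by combining previously solved subproblems.
public class LcsLength {
    static int lcs(String x, String y) {
        int[][] L = new int[x.length() + 1][y.length() + 1];
        for (int i = 1; i <= x.length(); i++) {
            for (int j = 1; j <= y.length(); j++) {
                if (x.charAt(i - 1) == y.charAt(j - 1)) {
                    L[i][j] = L[i - 1][j - 1] + 1;
                } else {
                    L[i][j] = Math.max(L[i - 1][j], L[i][j - 1]);
                }
            }
        }
        return L[x.length()][y.length()];
    }
    public static void main(String[] args) {
        System.out.println(lcs("ABCBDAB", "BDCABA"));  // 4, e.g. "BCAB"
    }
}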
1-30
Iterative improvement
• Start with a feasible solution
• Repeat the following step until no improvement can
be found:
– change the current feasible solution to a feasible solution
with a better value of the objective function
• Return the last feasible solution as optimal
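As a toy illustration (not from the slides), hill climbing over the feasible set {0, …, 100} for the objective f(x) = x·(100 − x): each step replaces the current feasible solution by a neighbouring one with a better objective value, and the last solution is returned when no neighbour improves (for this simple objective the local optimum is also global):

// Iterative improvement sketch: start with a feasible solution and
// repeatedly move to a better neighbouring solution until none exists.
public class HillClimb {
    static int f(int x) { return x * (100 - x); }     // objective to maximize

    static int improve(int x) {                       // any x in 0..100 is feasible
        while (true) {
            if (x + 1 <= 100 && f(x + 1) > f(x))    x = x + 1;  // better neighbour
            else if (x - 1 >= 0 && f(x - 1) > f(x)) x = x - 1;
            else return x;                            // no improvement possible
        }
    }
    public static void main(String[] args) {
        System.out.println(improve(3));  // 50, the maximizer of f
    }
}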
1-31
Exercise 3
1-32
Exercise 3
1-33
Exercise 4
Consider the following algorithm for finding the distance between the two
closest elements in an array of numbers.
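A straightforward brute-force version consistent with the improvement described on the next slide (it examines every ordered pair with i ≠ j and recomputes |A[i] − A[j]| in the innermost loop):
Algorithm MinDistance(A[0..n − 1])
//Input: An array A[0..n − 1] of numbers
//Output: The minimum distance between two of its elements
dmin ← ∞
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        if i ≠ j and |A[i] − A[j]| < dmin
            dmin ← |A[i] − A[j]|
return dmin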
1-34
Exercise 4
The following improved version considers the same pair of elements only once
and avoids recomputing the same expression in the innermost loop:
Algorithm MinDistance2(A[0..n − 1])
//Input: An array A[0..n − 1] of numbers
//Output: The minimum distance d between two of its elements
dmin ← ∞
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        temp ← |A[i] − A[j]|
        if temp < dmin
            dmin ← temp
return dmin
A faster algorithm is based on the idea of presorting
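A Java sketch of that presorting idea (assuming the array may be copied and reordered): after sorting, the two closest elements must be adjacent, so one pass over consecutive pairs suffices.

import java.util.Arrays;

// Presorting sketch: sort first, then the minimum distance is the
// smallest difference between consecutive elements.
public class MinDistancePresort {
    static int minDistance(int[] a) {        // assumes a.length >= 2
        int[] b = a.clone();
        Arrays.sort(b);
        int dmin = Integer.MAX_VALUE;
        for (int i = 0; i + 1 < b.length; i++) {
            dmin = Math.min(dmin, b[i + 1] - b[i]);
        }
        return dmin;
    }
    public static void main(String[] args) {
        System.out.println(minDistance(new int[]{9, 1, 6, 7, 3}));  // 1 (|6 - 7|)
    }
}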
1-35
Basic steps in algorithm correctness verification
• When an algorithm is designed it should be
analyzed at least from the following points of
view:
- Correctness. This means verifying that the algorithm leads to the solution
of the problem
- Efficiency. This means establishing the amount of resources (memory space
and processing time) needed to execute the algorithm on a machine
1-36
Basic steps in algorithm correctness verification
• To verify whether an algorithm really solves the
problem for which it is designed we can use
one of the following strategies:
- Experimental analysis (testing)
- Formal analysis (proving).
1-37
Basic steps in algorithm correctness verification
• The main steps in the formal analysis of the
correctness of an algorithm are:
- Identification of the properties of input data (the
so-called problem's preconditions).
- Identification of the properties which must be
satisfied by the output data (the so-called
problem's postconditions).
- Proving that, starting from the preconditions and
executing the actions specified in the algorithm,
one obtains the postconditions.
1-38
Formal methods in correctness
verification
To prove the algorithm's correctness one can
apply the following strategy:
- based on the problem's preconditions and postconditions, some assertions
concerning the algorithm's state are constructed;
- for each processing step one proves that from the previous assertion one
arrives at the next assertion.
1-39
Formal methods in correctness
verification
• Let us denote by P the problem's
preconditions, by A the algorithm and by Q
the problem's postconditions. The triple
<P, A, Q> corresponds to a correct algorithm (in this situation we shall use
the notation P —A→ Q, the arrow being labelled by the algorithm A) if the
following statement holds:
For input data which satisfy the preconditions P the algorithm will: (i) lead to
a result which satisfies the problem's postconditions; (ii) stop after a finite
number of processing steps.
Formal methods in correctness
verification
• To verify the correctness of an algorithm we
can analyze the correctness of each processing
structure of the algorithm.
• For each processing structure (sequential,
conditional and looping) there exist rules
which make the verification easy.
Sequential statement rule.
Let A be the algorithm consisting of the sequence of processing steps
(A1, A2, …, An). Let us denote by Pi−1 and Pi the algorithm state before and
after executing the action Ai. The sequential structure's rule can be stated as
follows: if P ⇒ P0, Pi−1 —Ai→ Pi for i = 1 to n, and Pn ⇒ Q, then P —A→ Q.
Example 1
Let a and b be two distinct integers and x and y two
variables which contain the values a and b,
respectively. Swap the values of the two variables.
The problem's preconditions are {x = a, y = b} while
the postconditions are {x = b, y = a}.
Consider the following processing steps:
A1: aux ← x
A2: x ← y
A3: y ← aux
Example 1
The assertions concerning the algorithm state
are the following: P = {x = a, y = b} (before A1),
P1 = {aux = a; x = a; y = b} (after A1), P2 = {aux =
a; x = b; y = b} (after A2), P3 = {aux = a; x = b; y =
a} (after A3).
P = P0, P3 ⇒ Q and Pi−1 —Ai→ Pi for i = 1 to 3.
According to the rule of the sequential structure it
follows that this algorithm is correct.
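A tiny Java rendering of this reasoning (a sketch; the concrete values a = 3, b = 7 are mine), with the assertions P1, P2, P3 written as assert statements (run with java -ea to enable them):

// The swap sequence A1, A2, A3 with the state assertions checked at run time.
public class SwapCheck {
    public static void main(String[] args) {
        int a = 3, b = 7;                               // two distinct integers
        int x = a, y = b;                               // precondition P: x = a, y = b
        int aux;
        aux = x;  assert aux == a && x == a && y == b;  // P1
        x = y;    assert aux == a && x == b && y == b;  // P2
        y = aux;  assert aux == a && x == b && y == a;  // P3, which implies Q
        System.out.println(x + " " + y);                // prints: 7 3
    }
}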
Example 2
Let us consider now the following sequence of
processing steps:
A1: x ← y
A2: y ← x
In this case the corresponding assertions are {x = b, y = b} (both after A1
and after A2). Thus, in this case the steps A1 and A2 cannot establish the
postconditions {x = b, y = a}.
Conditional statement rule.
consider the following conditional structure:
S: if c then A1 else A2 endif
If P and Q are the problem's preconditions and postconditions respectively,
then the rule can be stated as follows: if P ∧ c —A1→ Q and P ∧ c̄ —A2→ Q,
then P —S→ Q.
Example 1
Let us consider the problem of finding the minimum between
two distinct real values:
if a < b then A1 : m ← a { a < b, m = a}
else A2 : m ← b { a >= b, m = b}
endif
• The algorithm preconditions are a ∈ R, b ∈ R, a ≠b and the
postcondition is m = min{a, b}.
• The condition c is a < b. If a < b then m = a < b thus m =
min{a, b}.
• In the other case, a > b holds and the processing step A2
leads to m = b < a thus we obtain again that m = min{a, b}.
Loop statement rule
• Loops are a very common source of
programming errors because it is often difficult
to determine that a loop body executes exactly
the right number of times
• Consider the following structure:
S: while c do A endwhile,
with a precondition P and a postcondition Q.
Loop statement rule
• The loop statement is correct if one can find an assertion I (the loop
invariant) and a function t (the termination function) such that:
(a) The assertion I is true at the beginning (before the loop is entered).
(b) I is an invariant property: if I is true before executing A and c is
also true, then I remains true after the execution of A (I ∧ c —A→ I).
(c) At the end of the loop (when c becomes false) the postcondition Q
can be inferred from I (I ∧ c̄ ⇒ Q).
(d) After each execution of A the value of t decreases (c ∧ (t(p) = k)
—A→ t(p + 1) < k, where p can be interpreted as a counting variable
associated with each loop execution).
(e) If c is true then t(p) ≥ 1 (at least one iteration remains to be
executed), and when t becomes 0 the condition c becomes false
(meaning that the algorithm stops after a finite number of iterations).
Example 1
Let us consider the problem of finding the
minimum in a finite sequence. We shall analyze
the following algorithm:
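A sketch of such a minimum-finding algorithm, consistent with the analysis that follows (the exact step labels may differ):
A1: min ← a[1]
A2: i ← 2
while i ≤ n do
    A3: if a[i] < min then min ← a[i]
    A4: i ← i + 1
endwhile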
Example 1
• The precondition is n ≥ 1 and the postcondition is min ≤
a[i], i = 1 to n.
• The invariant assertion corresponding to each iteration is min ≤ a[j] for
j = 1 to i − 1. Indeed, after the execution of A2 and A4 the property is
satisfied.
• The stopping condition is i = n+1 and one can see that if it is
satisfied then the final assertion implies the postcondition.
• The termination function is in this case t(p) = n + 1 − i(p), where
i(p) is the value of index i corresponding to iteration p.
• Since i(p+1) = i(p) + 1 it follows that t(p+1) < t(p), and t(p) = 0
implies i = n + 1, which implies the stopping condition.