
Algorithms and Data Structures
(CSC112)

Review
• Introduction to Algorithms and Data
Structures
• Static Data Structures
• Searching Algorithms
• Sorting Algorithms
• List implementation through Array
• ADT: Stack
• ADT: Queue
• Dynamic Data Structures (Linear)
• Linked List (Linear Data Structure)
• Dynamic Data Structures (Non-Linear)
• Trees, Heap, Hashing, Graphs

Algorithm Analysis
Problem Solving
Space Complexity
Time Complexity
Classifying Functions by Their
Asymptotic Growth

Problem Solving: Main Steps
1. Problem definition
2. Algorithm design / Algorithm
specification
3. Algorithm analysis
4. Implementation
5. Testing
6. Maintenance

1. Problem Definition
What is the task to be accomplished?
Calculate the average of the grades
for a given student
Find the largest number in a list

What are the time/space performance requirements?

2. Algorithm
Design/Specifications
Algorithm: Finite set of instructions that,
if followed, accomplishes a particular
task.
Describe: in natural language / pseudo-
code / diagrams / etc.
Criteria to follow:
Input: Zero or more quantities (externally
produced)
Output: One or more quantities
Definiteness: Clarity, precision of each
instruction
Effectiveness: Each instruction has to be
basic enough and feasible
Finiteness: The algorithm has to stop after a finite (may be very large) number of steps
4,5,6: Implementation, Testing and
Maintenance

Implementation
Decide on the programming language
to use
C, C++, Python, Java, Perl, etc.
Write clean, well documented code

Test, test, test

Integrate feedback from users, fix bugs, ensure compatibility across different versions  Maintenance
3. Algorithm Analysis
Space complexity
How much space is required
Time complexity
How much time does it take to run the
algorithm

Space Complexity
Space complexity = The amount of
memory required by an algorithm to run
to completion
the most often encountered cause is “memory leaks”: the amount of memory required is larger than the memory available on a given system
Some algorithms may be more efficient
if data completely loaded into memory
Need to look also at system limitations
e.g. Classify 2GB of text in various categories
– can I afford to load the entire collection?
Space Complexity (cont…)
1. Fixed part: The size required to store
certain data/variables, that is
independent of the size of the problem:
- e.g. name of the data collection

2. Variable part: Space needed by variables, whose size is dependent on the size of the problem:
- e.g. actual text
- load 2GB of text vs. load 1MB of text

Time Complexity
Often more important than space
complexity
space available tends to be larger and larger
time is still a problem for all of us

3-4GHz processors on the market
still …
researchers estimate that the computation of various transformations for 1 single DNA chain for one single protein on a 1 TeraHz computer would take about 1 year to run to completion
Algorithms running time is an important
issue
Pseudo Code and Flow Charts
Pseudo Code
Basic elements of Pseudo code
Basic operations of Pseudo code
Flow Chart
Symbols used in flow charts
Examples

Pseudo Code and Flow Charts
There are two commonly used tools to help
to document program logic (the algorithm).
These are
Flowcharts
Pseudocode.
Generally, flowcharts work well for small
problems but Pseudocode is used for larger
problems.

Pseudo-Code
Pseudo-Code is simply a numbered list of
instructions to perform some task.

Writing Pseudo Code
Number each instruction
 This is to enforce the notion of an
ordered sequence of operations
 Furthermore we introduce a dot
notation (e.g. 3.1 come after 3 but
before 4) to number subordinate
operations for conditional and
iterative operations
Each instruction should be unambiguous and
effective.
Completeness. Nothing is left out.
Pseudo-code
Statements are written in simple English without
regard to the final programming language.
Each instruction is written on a separate line.
The pseudo-code is the program-like statements
written for human readers, not for computers.
Thus, the pseudo-code should be readable by
anyone who has done a little programming.
Implementation is to translate the pseudo-code
into programs/software, such as “C++” language
programs.

Basic Elements of Pseudo-
code
A Variable
 Having name and value
 There are two operations performed on a
variable
 Assignment Operation is the one in
which we associate a value to a variable.
 The other operation is the one in which at
any given time we intend to retrieve the
value previously assigned to that variable
(Read Operation)

Basic Elements of Pseudo-
code
Assignment Operation
This operation associates a value to
a variable.
While writing Pseudo-code you may
follow your own syntax.
 Some of the possible syntaxes are:
Assign 3 to x
Set x equal to 3
x=3

Basic Operations of Pseudo-
code
Read Operation
In this operation we intend to retrieve the value previously assigned to a variable. For example: Set value of x equal to y
Read the input from user
This operation causes the algorithm to get the value of a variable from the user.
Get x
Get a, b, c
Flow Chart
Some of the
common symbols
used in flowcharts
are shown.
…

20

With flowcharting, essential steps of an
algorithm are shown using the shapes
above.
 The flow of data between steps is
indicated by arrows, or flowlines. For
example, a flowchart (and equivalent
Pseudocode) to compute the interest on a
loan is shown below:
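
The original flowchart image is not reproduced here. A plausible pseudocode version of the loan-interest computation, assuming simple interest (the variable names are illustrative), would be:

```
1. Get principal
2. Get rate
3. Get time
4. Set interest equal to principal * rate * time
5. Display interest
```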

List
List Data Structure
List operations
List Implementation
Array
Linked List

The LIST Data Structure
 The List is among the most generic of data
structures.

 Real life:

a. shopping list,
b. groceries list,
c. list of people to invite to dinner
d. List of presents to get

Lists
 A list is a collection of items that are all of the same type (grocery items, integers, names)

 The items, or elements, of the list are stored in some particular order

 It is possible to insert new elements into various positions in the list and remove any element of the list

Lists
 List is a set of elements in a linear order.
For example, data values a1, a2, a3, a4 can be
arranged in a list:

(a3, a1, a2, a4)

In this list, a3 is the first element, a1 is the second element, and so on

 The order is important here; this is not just a random collection of elements, it is an ordered collection
List Operations
Useful operations
 createList(): create a new list (presumably empty)
 copy(): set one list to be a copy of another
 clear(): clear a list (remove all elements)
 insert(X, ?): Insert element X at a particular position
in the list
 remove(?): Remove element at some position in
the list
 get(?): Get element at a given position
 update(X, ?): replace the element at a given position
with X
 find(X): determine if the element X is in the list
 length(): return the length of the list.

Pointer
Pointer
Pointer Variables
Dynamic Memory Allocation
Functions

What is a Pointer?
A Pointer provides a way of
accessing a variable without
referring to the variable directly.
The mechanism used for this
purpose is the address of the
variable.
A variable that stores the address
of another variable is called a
pointer variable.

Pointer Variables
Pointer variable: A variable that holds an
address
Can perform some tasks more easily
with an address than by accessing
memory via a symbolic name:
Accessing unnamed memory locations
Array manipulation
etc.

Why Use Pointers?
To operate on data stored in an array
To enable convenient access within a function to large blocks of data, such as arrays, that are defined outside the function.
To allocate space for new variables
dynamically–that is during program
execution

Pointer Data Types and Pointer Variables

 Pointer variable: variable whose content is a memory address
Syntax to declare pointer variable:
dataType *identifier;
Address of operator: Ampersand, &
Dereferencing operator/Indirection
operator: Asterisk, *

The Address-Of Operator (&)
The address-of operator, &, is a
unary operator that obtains the
address in memory where a
variable is stored.
int number = 1234;
int *pnumber = &number; // stores address of number in pnumber
char a = 'a';
char *pa = &a;          // stores address of a in pa
The Indirection Operator
How is a pointer variable used to access the contents of a memory location?
The indirection operator, *, is used for this purpose.

cout << *pnumber;

Arrays & Strings
Array
Array Elements
Accessing array elements
Declaring an array
Initializing an array
Two-dimensional Array
Array of Structure
String
Array of Strings
Examples

Introduction
Arrays
Contain fixed number of elements of same
data type
Static entity- same size throughout the
program
An array must be defined before it is used
An array definition specifies a variable type,
a name and size
Size specifies how many data items the array
will contain
An example

Array Elements
The items in an array are called elements
All the elements are of the same type
The first array element is numbered 0
Four elements (0-3) are stored
consecutively in
the memory

Accessing array elements
To access an element specify array name
and array index number
Syntax:
arrayName[arrayIndex]
to access the 3rd element of array age we use age[2], where age is the name of the array and 2 is the index number

Declaring an array
when declaring an array, specify
type of array
array name
number of elements
 arrayType arrayName[numberOfElements];
examples
 int results [10 ];
 float myArray [50];

declaring multiple arrays of same type
format similar to regular variables
example: int b[5], x[10];

Initializing an array
when an array is defined you can give values to each array element
example:
int n[5]={1,2,3,4,5};

Strings
two types of strings are used in C++
C-Strings and strings that are objects of the string class
we will study C-Strings only
C-Strings or C-Style String

an example

#include <iostream>
using namespace std;

int main()
{
    const int MAX = 80; // max characters in string
    char str[MAX];
    cout << "Enter a string: ";
    cin >> str;  // put string in str
    // display string from str
    cout << "You entered: " << str << endl;
    return 0;
}
the definition of the string variable looks
like the definition of an array of type char
char str[MAX];
we read a string from the keyboard and
place it in the string variable str
the extraction operator >> knows how to
deal with strings
if the user enters a string “Amanuensis”, it will be stored in str character by character
each character occupies 1 byte of memory
C-Strings must terminate with a byte containing 0
it is often represented by the character constant '\0', which is a character with an ASCII value of 0
this terminating zero is called the “Null character”
when the operator << displays a string, it displays characters until it encounters the null character

Searching
Introduction to Searching
External and Internal Searching
Types of Searching
Linear or sequential search
Binary Search
Algorithms for Linear Search
Algorithms for Binary Search

Introduction
Information retrieval is one of the most
important applications of computers
We are given a name and are asked for an
associated telephone listing
We are given an account number and are
asked for the transactions occurring in that
account
We are given an employee name or number
and are asked for the employee records

Searching is a process of checking and
finding an element from a list of elements
If we want to find the presence of an
element “data” in A, then we have to
search for it
The search is successful if data does
appear in A and unsuccessful otherwise
 A question you should always ask when
selecting a search algorithm is “How fast does
the search have to be?” The reason is that, in
general, the faster the algorithm is, the more
complex it is.
 Bottom line: you don’t always need to, or should, use the fastest algorithm
Key
In these examples, we are given one piece
of information, which we shall call a key
We are asked to find a record that contains
other information associated with the key
We shall allow both the possibility that
there is more than one record with the
same key and that there is no record with
the given key

Records and their keys

Types of Searching
There are two types of searching
techniques:
Linear or Sequential Searching
Binary Searching

Linear or Sequential Searching
 In linear search, each element of an array is read one by one sequentially
It is compared with the desired element
A search will be unsuccessful if all the
elements are read and the desired element
is not found

Binary Search
Sequential search is easy to write and efficient
for short lists, but a disaster for long ones
Imagine trying to find the name “Ahmad Tahir”
in a large telephone book by reading one name
at a time starting at the front of the book
To find any entry in a long list, there are far
more efficient methods, provided that the keys
in the list are already sorted into order
If we have an ordered list, we can use a
different strategy
The binary search gets its name because the
algorithm continually divides the list into two
parts
Binary Search
Binary search is an extremely efficient
algorithm when it is compared to linear search
Binary search technique searches “data” in
minimum possible comparisons
First compare the target key with one in the
center of the list and then restrict our attention
to only the first or second half of the list,
depending on whether the target key comes
before or after the central one
With one comparison of keys we thus reduce
the list to half its original size
Continuing in this way, at each step, we reduce
the length of the list to be searched by half
Recursion
Introduction to Recursion
Recursive Definition
Recursive Algorithms
Finding a Recursive Solution
Example Recursive Function
Recursive Programming
Rules for Recursive Function
Example Tower of Hanoi
Other examples

Introduction
 Any function can call another function
 A function can even call itself
 When a function calls itself, it is making a recursive call
 Recursive Call
 A function call in which the function being called
is the same as the one making the call
 Recursion is a powerful technique that can be used in place of iteration (looping)
 Recursion
 Recursion is a programming technique in which
functions call themselves.

Recursive Definition
 A definition in which something is defined
in terms of smaller versions of itself.
 To do recursion we should know the followings
Base Case:
The case for which the solution can be
stated non-recursively
The case for which the answer is explicitly
known.
General Case:
The case for which the solution is
expressed in smaller version of itself. Also
known as recursive case

Recursive Algorithm
Definition
An algorithm that calls itself
Approach
Solve small problem directly
Simplify large problem into 1 or more smaller
sub problem(s) & solve recursively
Calculate solution from solution(s) for sub
problem

Rules For Recursive Function
1. In recursion, it is essential for a function to call
itself, otherwise recursion will not take place.
2. To stop the recursive function it is necessary to
base the recursion on test condition and proper
terminating statement such as exit() or return
must be written using if() statement.
3. When a recursive function is executed, the recursive calls are not implemented instantly. All the recursive calls are pushed onto the stack; once the terminating condition is detected, the recursive calls stored on the stack are popped and executed.
4. During recursion, at each recursive call new memory is allocated to all the local variables of the function.
Sorting Algorithms
There are many sorting algorithms, such
as:
Selection Sort
Insertion Sort
Bubble Sort
Merge Sort
Quick Sort

The first three are the foundations for faster and more efficient algorithms.
Sorting
 Sorting is a process that organizes a collection of data into
either ascending or descending order.
 An internal sort requires that the collection of data fit entirely
in the computer’s main memory.
 We can use an external sort when the collection of data
cannot fit in the computer’s main memory all at once but must
reside in secondary storage such as on a disk.
 We will analyze only internal sorting algorithms.
 Any significant amount of computer output is generally
arranged in some sorted order so that it can be interpreted.
 Sorting also has indirect uses. An initial sort of the data can
significantly enhance the performance of an algorithm.
 The majority of programming projects use a sort somewhere, and in many cases, the sorting cost determines the running time.
 A comparison-based sorting algorithm makes ordering
decisions only on the basis of comparisons.
Selection Sort
 The list is divided into two sublists, sorted and
unsorted, which are divided by an imaginary
wall.
 We find the smallest element from the unsorted
sublist and swap it with the element at the
beginning of the unsorted data.
 After each selection and swapping, the imaginary wall between the two sublists moves one element ahead, increasing the number of sorted elements and decreasing the number of unsorted ones.
 Each time we move one element from the
unsorted sublist to the sorted sublist, we say
that we have completed a sort pass.
 A list of n elements requires n-1 passes to
completely rearrange the data.
Sorted Unsorted

23 78 45 8 32 56 Original List

8 78 45 23 32 56 After pass 1

8 23 45 78 32 56 After pass 2

8 23 32 78 45 56 After pass 3

8 23 32 45 78 56
After pass 4

8 23 32 45 56 78 After pass 5
Insertion Sort
Insertion sort is a simple sorting
algorithm that is appropriate for small
inputs.
Most common sorting technique used by card
players.
The list is divided into two parts: sorted
and unsorted.
In each pass, the first element of the
unsorted part is picked up, transferred
to the sorted sublist, and inserted at the
appropriate place.
A list of n elements will take at most n-1
passes to sort the data.
Sorted Unsorted

23 78 45 8 32 56 Original List

23 78 45 8 32 56 After pass 1

23 45 78 8 32 56 After pass 2

8 23 45 78 32 56 After pass 3

8 23 32 45 78 56
After pass 4

8 23 32 45 56 78 After pass 5
Bubble Sort
The list is divided into two sublists:
sorted and unsorted.
The smallest element is bubbled from
the unsorted list and moved to the
sorted sublist.
After that, the wall moves one element
ahead, increasing the number of sorted
elements and decreasing the number of
unsorted ones.
Each time an element moves from the
unsorted part to the sorted part one sort
pass is completed.
Given a list of n elements, bubble sort requires up to n-1 passes to sort the data.
Bubble Sort
23 78 45 8 32 56 Original List

8 23 78 45 32 56 After pass 1

8 23 32 78 45 56 After pass 2

8 23 32 45 78 56 After pass 3

8 23 32 45 56 78
After pass 4
Mergesort
 Mergesort algorithm is one of two important divide-
and-conquer sorting algorithms (the other one is
quicksort).
 It is a recursive algorithm.
Divides the list into halves,
Sorts each half separately, and
Then merges the sorted halves into one sorted array.
Mergesort - Example

6 3 9 1 5 4 7 2
divide
6 3 9 1 5 4 7 2
divide divide
6 3 9 1 5 4 7 2

divide divide divide divide


6 3 9 1 5 4 7 2
merge merge merge merge

3 6 1 9 4 5 2 7
merge merge

1 3 6 9    2 4 5 7
merge

1 2 3 4 5 6 7 9
Quicksort
 Like mergesort, Quicksort is also based on
the divide-and-conquer paradigm.
 But it uses this technique in a somewhat
opposite manner,
as all the hard work is done before the
recursive calls.
 It works as follows:
1. First, it partitions an array into two parts,
2. Then, it sorts the parts independently,
3. Finally, it combines the sorted
subsequences by
a simple concatenation.
Quicksort (cont.)
The quick-sort algorithm consists of the following
three steps:

1. Divide: Partition the list.


To partition the list, we first choose some element
from the list for which we hope about half the
elements will come before and half after. Call this
element the pivot.
Then we partition the elements so that all those
with values less than the pivot come in one sublist
and all those with greater values come in another.
2. Recursion: Recursively sort the sublists separately.
3. Conquer: Since every element of the first sublist comes before every element of the second, combining the sorted sublists requires no extra work.
Partition
 Partitioning places the pivot in its correct position within the array.

 Arranging the array elements around the pivot p generates two smaller sorting problems.
sort the left section of the array, and sort the right
section of the array.
when these two smaller sorting problems are solved
recursively, our bigger sorting problem is solved.
Comparison of Sorting
Algorithms
