
SPPU BE

Computer Engineering
2019 Pattern (Insem)

Design & Analysis of Algorithms

Importance of Correctness in an Algorithm and Loop Invariant Property

1. Importance of Correctness:

- Correctness ensures that the algorithm provides the right solution for every possible input.
- Without correctness, the algorithm might produce incorrect results, leading to errors in the system.

- Correct algorithms are reliable and can be trusted to perform the task they are designed for.

2. Loop Invariant Property:


- A loop invariant is a condition that holds true before and after each iteration of a loop.

- It helps in proving that a loop is correctly implemented.

- By ensuring the loop invariant holds, we can prove that the algorithm is functioning as expected.
3. Example of Summing `n` Numbers Using Loop Invariant:

- Algorithm: Sum the first `n` natural numbers.
- Invariant: After the `i`-th iteration, the running sum equals the sum of the first `i` numbers.
- Proof:
  - Initialization: Before the loop, the sum is 0, which is correct for 0 numbers.
  - Maintenance: If the sum is correct before an iteration, adding the next number keeps it correct.
  - Termination: When the loop ends, the sum is correct for all `n` numbers.
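The same argument can be attached directly to code. Below is a minimal Java sketch (the class and method names are illustrative, not from the paper) that sums the first `n` natural numbers, with the invariant written as comments at the points where it must hold.

```java
// Minimal sketch: summing the first n natural numbers with the loop
// invariant stated as comments. Names are assumed for illustration.
public class SumInvariant {
    static int sum(int n) {
        int sum = 0;                      // Initialization: the sum of 0 numbers is 0
        for (int i = 1; i <= n; i++) {
            // Invariant (before this iteration): sum == 1 + 2 + ... + (i - 1)
            sum += i;
            // Invariant (after this iteration): sum == 1 + 2 + ... + i
        }
        // Termination: the loop exits with i == n + 1, so sum == 1 + 2 + ... + n
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sum(5)); // prints 15
    }
}
```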

Iterative Algorithm and Design Issues

1. Iterative Algorithm:

- An iterative algorithm repeats a set of instructions using loops until a condition is met.
- Example: Calculating the factorial of a number by multiplying all numbers from 1 to `n` (see the sketch after this list).

2. Design Issues in Iterative Algorithms:

- Termination Condition: It must be clear when the loop should stop, to avoid infinite loops.
- Efficiency: The algorithm should not perform unnecessary iterations that waste resources.
- Memory Usage: The algorithm should manage memory efficiently, especially in large loops.
- Correctness: The loop must preserve its loop invariants throughout execution.
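As referenced in the factorial example above, here is a minimal Java sketch (names such as `IterativeFactorial` are assumed for illustration) of an iterative factorial with an explicit termination condition and loop invariant.

```java
// Minimal sketch of an iterative factorial, illustrating a clear
// termination condition and a loop invariant. Names are assumptions.
public class IterativeFactorial {
    static long factorial(int n) {
        long result = 1;                   // invariant before iteration i: result == (i - 1)!
        for (int i = 1; i <= n; i++) {     // terminates after exactly n iterations
            result *= i;                   // invariant restored: result == i!
        }
        return result;                     // result == n!
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```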

Proving Algorithm Correctness and Using Counter Examples

1. Proving Correctness:

- Prove that the algorithm works for all inputs, using methods such as mathematical induction or loop invariants.
- Start with a base case, then show that if the algorithm is correct up to one step, it remains correct at the next step.

2. Using Counterexamples:

- A counterexample is a specific input on which the algorithm fails, proving that it is incorrect.
- Example: If a sorting algorithm fails to sort some particular array correctly, that array is a counterexample (a small illustration follows this list).
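As a small illustration (hypothetical, not from the paper), the sketch below shows a max-finding routine with a faulty initialization; a single all-negative array serves as a counterexample that disproves its correctness.

```java
// Hypothetical illustration: a buggy max-finding routine and a concrete
// counterexample that proves it incorrect.
public class CounterexampleDemo {
    // Bug: initializing max to 0 silently assumes the array holds a non-negative value.
    static int buggyMax(int[] a) {
        int max = 0;
        for (int x : a) {
            if (x > max) max = x;
        }
        return max;
    }

    public static void main(String[] args) {
        int[] counterexample = {-5, -2, -9};          // true maximum is -2
        System.out.println(buggyMax(counterexample)); // prints 0, so the algorithm is wrong
    }
}
```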

Problem-Solving Strategies

1. Divide and Conquer:

- Break the problem into smaller, manageable subproblems.
- Solve each subproblem individually and combine their solutions.
- Example: Merge sort.

2. Dynamic Programming:

- Solve problems by breaking them down into simpler subproblems and storing their results.
- Avoids redundant calculations (see the sketch after this list).
- Example: Fibonacci sequence calculation.

3. Greedy Algorithm:

- Make the locally best choice at each step in the hope of reaching a global optimum.
- Example: Coin change problem.

4. Backtracking:

- Incrementally build solutions and backtrack when a partial solution fails to meet the constraints.
- Example: Maze solving by trying different paths and backtracking when a dead end is reached.
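The dynamic programming strategy (item 2 above) can be made concrete with the Fibonacci example it mentions. The following Java sketch (class and method names are assumptions) stores subproblem results so that each value is computed only once.

```java
// Minimal sketch of dynamic programming: Fibonacci numbers with memoization
// to avoid redundant calculations. Names are assumed for illustration.
import java.util.HashMap;
import java.util.Map;

public class FibonacciDP {
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                        // base cases: F(0) = 0, F(1) = 1
        if (memo.containsKey(n)) return memo.get(n); // reuse a stored subproblem result
        long value = fib(n - 1) + fib(n - 2);        // combine the two smaller subproblems
        memo.put(n, value);                          // store the result for later reuse
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(40)); // prints 102334155 without recomputing subproblems
    }
}
```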
What is Best, Average and Worst case Analysis of Algorithms? Analyse the following algorithm for Best, Average and Worst case. [8]

void sort(int a[], int n) {
    int i, j, key;

    for (i = 0; i < n; i++) {
        j = i - 1;
        key = a[i];

        // shift elements greater than key one position to the right
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = key;   // insert key at its correct position
    }
}

Best, Average, and Worst Case Analysis of Algorithms

1. Introduction to Cases:

- Best Case: The minimum time an algorithm takes to complete. It occurs when the input is the most favorable.
- Average Case: The expected time for the algorithm to complete, averaged over all possible inputs.
- Worst Case: The maximum time an algorithm takes. It occurs when the input is the least favorable.

2. Analyzing the Provided Algorithm:

- The given algorithm is Insertion Sort.

3. Best Case Analysis:

- When it occurs: The array is already sorted.
- Time Complexity: \(O(n)\).
- Explanation: The inner while loop never executes because each element is already in its correct position. The outer loop runs \(n\) times, but no shifts are needed.

4. Average Case Analysis:

- When it occurs: The array elements are in random order.
- Time Complexity: \(O(n^2)\).
- Explanation: On average, each element is compared with about half of the already sorted elements, so the inner while loop runs roughly \(i/2\) times in the \(i\)-th iteration of the outer loop, giving on the order of \(n^2/4\) comparisons overall.

5. Worst Case Analysis:

- When it occurs: The array is sorted in reverse order.
- Time Complexity: \(O(n^2)\).
- Explanation: The inner while loop runs the maximum number of times in each iteration of the outer loop. Every element is compared with every previous element, leading to quadratic time complexity.

P, NP, NP-Hard, and NP-Complete Problems

1. P (Polynomial Time):

- Definition: Problems that can be solved by an algorithm in polynomial time, meaning the time to solve the problem grows as a polynomial function of the input size.
- Example: Sorting a list of numbers using algorithms like merge sort or quicksort.

2. NP (Nondeterministic Polynomial Time):

- Definition: Problems for which a proposed solution can be verified in polynomial time, even if finding the solution might take much longer.
- Example: For the traveling salesman problem (TSP), where a salesman must visit a set of cities and return to the starting point, you can quickly check whether a given tour stays within a given total distance, but finding such a tour is hard.

3. NP-Hard:

- Definition: Problems that are at least as hard as the hardest problems in NP. If an NP-hard problem could be solved in polynomial time, then every NP problem could be solved in polynomial time. (An NP-hard problem need not itself be in NP.)
- Example: The optimization version of the traveling salesman problem, i.e., finding the shortest tour.

4. NP-Complete:

- Definition: Problems that are both in NP and NP-hard. If any one NP-complete problem can be solved in polynomial time, then every NP problem can be solved in polynomial time.
- Example: The subset sum problem, where you determine whether some subset of a set of numbers adds up to a given sum.

3-SAT Problem
1. Definition:

- A specific case of the Boolean satisfiability problem (SAT) in which the Boolean expression is in Conjunctive Normal Form (CNF) with exactly three literals (variables or their negations) in each clause.
- Example: \((x_1 \vee \neg x_2 \vee x_3) \wedge (\neg x_1 \vee x_2 \vee \neg x_3)\). The question is whether there is an assignment of true/false values to the variables that makes the entire expression true.

2. Importance in Theoretical Computer Science:

- SAT was the first problem proven to be NP-complete (the Cook-Levin theorem), and 3-SAT is also NP-complete. This means that if you could solve 3-SAT efficiently, you could solve all NP problems efficiently (a small verification sketch follows this list).
- It serves as a foundation for proving the complexity of other problems, by reducing SAT or 3-SAT to them.
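To make the verification claim concrete, here is a small Java sketch (the clause encoding and all names are assumptions for illustration) that checks a proposed truth assignment against a 3-CNF formula in time linear in the size of the formula.

```java
// Minimal sketch: verifying a proposed 3-SAT solution in polynomial time.
// Each clause is an array of three literals; +k means x_k, -k means NOT x_k.
// The encoding and names are assumptions, not from the paper.
public class ThreeSatVerifier {
    static boolean verify(int[][] clauses, boolean[] assignment) {
        for (int[] clause : clauses) {
            boolean clauseSatisfied = false;
            for (int literal : clause) {
                int variable = Math.abs(literal);        // variables are numbered from 1
                boolean value = assignment[variable];
                if (literal > 0 ? value : !value) {      // this literal evaluates to true
                    clauseSatisfied = true;
                    break;
                }
            }
            if (!clauseSatisfied) return false;          // one false clause falsifies the CNF
        }
        return true;                                     // every clause has a true literal
    }

    public static void main(String[] args) {
        // (x1 v !x2 v x3) ^ (!x1 v x2 v !x3), with x1 = true, x2 = true, x3 = false
        int[][] clauses = {{1, -2, 3}, {-1, 2, -3}};
        boolean[] assignment = {false, true, true, false}; // index 0 unused
        System.out.println(verify(clauses, assignment));   // prints true
    }
}
```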

Q4)
a) What is NP-complete class problem? How would you prove the vertex cover problem is NP-complete class problem?

NP-complete Class Problem:

- NP (Nondeterministic Polynomial time): These are decision problems for which a proposed solution can be verified as correct or incorrect in polynomial time (i.e., the time it takes to verify the solution is a polynomial function of the input size).
- NP-complete: These are the hardest problems within NP. If you can solve any NP-complete problem in polynomial time, you can solve all NP problems in polynomial time. Essentially, NP-complete problems are both in NP and as hard as any problem in NP.

Proving Vertex Cover is NP-complete:

1. Vertex Cover Problem Definition: Given a graph \( G \) with vertices and edges, a vertex cover is a set of vertices such that every edge in the graph has at least one endpoint in this set. The problem is to find the smallest possible vertex cover (or, as a decision problem, to decide whether a cover of size at most \( k \) exists).

2. Proof Steps:

- Step 1: Show Vertex Cover is in NP.
  - Given a set of vertices, we can check in polynomial time whether it is a vertex cover by ensuring every edge has at least one endpoint in this set (see the sketch after this proof).

- Step 2: Show Vertex Cover is NP-hard.
  - To do this, we reduce a known NP-complete problem to the vertex cover problem. A common reduction is from the 3-SAT problem or the Clique problem, both of which are NP-complete.
  - If we can convert an instance of the known NP-complete problem to an instance of the vertex cover problem in polynomial time, such that solving the vertex cover instance solves the original instance, then the vertex cover problem is NP-hard.

- Step 3: Conclusion:
  - Since vertex cover is in NP and NP-hard, it is NP-complete.
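The polynomial-time check in Step 1 can be written out directly. Below is a small Java sketch (the graph representation and names are assumptions) that verifies a candidate vertex cover in time proportional to the number of edges.

```java
// Minimal sketch of the polynomial-time check from Step 1: is the candidate
// set a vertex cover? The edge-list representation and names are assumptions.
import java.util.Set;

public class VertexCoverCheck {
    // edges[i] = {u, v}; the candidate is a vertex cover if every edge
    // has at least one endpoint inside the candidate set.
    static boolean isVertexCover(int[][] edges, Set<Integer> candidate) {
        for (int[] edge : edges) {
            if (!candidate.contains(edge[0]) && !candidate.contains(edge[1])) {
                return false;   // this edge is left uncovered
            }
        }
        return true;            // every edge touches the candidate set
    }

    public static void main(String[] args) {
        int[][] edges = {{1, 2}, {2, 3}, {3, 4}};
        System.out.println(isVertexCover(edges, Set.of(2, 3))); // true
        System.out.println(isVertexCover(edges, Set.of(1, 4))); // false: edge (2,3) uncovered
    }
}
```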

Q4)
b) What is Best, Average, and Worst case Analysis of Algorithms? Analyze the following algorithm for Best, Average, and Worst case:

Best, Average, and Worst Case Analysis of Algorithms:

- Best Case: The minimum time an algorithm takes to complete. It is the scenario where the algorithm performs the fewest operations. For example, in a search algorithm, the best case is when the item to be found is at the first position.
- Average Case: The expected time an algorithm takes, averaged over all possible inputs. This gives a general idea of the algorithm's efficiency in typical use cases.
- Worst Case: The maximum time an algorithm can take to complete. It is the scenario where the algorithm performs the most operations, often used to ensure that even the slowest performance is acceptable.

Analysis of Given Algorithm (Linear Search):

```java
int linearSearch(int a[], int n, int item) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] == item) {
            return a[i];   // item found; return the matching element
        }
    }
    return -1;             // item not present in the array
}
```

- Best Case:
  - The item is the first element in the array.
  - Time Complexity: \( O(1) \), because the algorithm finds the item in the first iteration.

- Average Case:
  - The item is somewhere in the middle of the array, so on average about half of the elements are examined.
  - Time Complexity: \( O(n) \), where \( n \) is the number of elements in the array.

- Worst Case:
  - The item is the last element in the array or not present at all.
  - Time Complexity: \( O(n) \), because the algorithm has to check every element in the array.

In this way, the time complexity of linear search is \( O(n) \) in both the average and worst cases, and \( O(1) \) in the best case.
