CC - Unit 1
Computing combines four basic elements:
1. Hardware: the physical components of the system, such as the processor, memory, and storage devices.
2. Software: the programs and operating systems that run on the hardware.
3. Networks: the communication links that connect computers and let them exchange data.
4. Algorithms: the step-by-step procedures used to solve problems.
Serial Computing:
In this approach, only one instruction is processed at a time, and tasks must
wait for the previous ones to complete before being executed.
Characteristics:
1. Sequential Execution: instructions are executed one after another, in program order.
2. Single Processor: all work runs on a single CPU.
3. Limited Performance: overall speed is bounded by how fast that one processor can work.
Advantages:
1. Simplicity: programs are easier to write, test, and debug.
2. Predictable Behavior: execution order is fixed, so results are deterministic.
Parallel Computing:
In this approach, multiple instructions are processed at the same time by dividing a task into smaller parts that run on multiple processors.
The classification of parallel computing is based on how tasks and data are
divided and executed across multiple processors.
1. Bit-level parallelism
2. Instruction-level parallelism
3. Task Parallelism
4. Data-level parallelism
5. Pipeline parallelism
1. Bit-level Parallelism:
It is the form of parallel computing based on increasing the
processor's word size.
It reduces the number of instructions that the system must execute in
order to perform a task on large-sized data.
Operations on smaller units (bits) performed simultaneously.
For example, suppose an 8-bit processor must perform an operation
on two 16-bit numbers. It first operates on the 8 lower-order bits
and then on the 8 higher-order bits, so two instructions are needed
to execute the operation. A 16-bit processor can perform the same
operation with a single instruction.
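A minimal sketch of this idea in Python (the 0xFF masks simulate the narrow 8-bit registers, and the function name and input values are made up for illustration):

def add16_on_8bit(a, b):
    # Instruction 1: add the 8 lower-order bits and note the carry.
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8
    # Instruction 2: add the 8 higher-order bits plus the carry.
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(hex(add16_on_8bit(0x1234, 0x0FCD)))  # 0x2201, using two 8-bit adds
print(hex((0x1234 + 0x0FCD) & 0xFFFF))     # same result in one 16-bit add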
2. Instruction-level parallelism:
Parallelism achieved by executing multiple instructions simultaneously
within a single CPU.
Overlapping low-level operations.
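A tiny illustration of the idea, though the overlap happens inside the CPU hardware rather than in the Python code itself: the first two statements below are independent of each other, so a superscalar processor can execute them at the same time.

a, b, c, d = 2, 3, 4, 5
x = a * b   # independent of the next line: can be issued in the same cycle
y = c * d   # independent of x
z = x + y   # depends on both x and y, so it must wait for them
print(z)    # 26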
3. Task Parallelism:
Tasks or threads are divided among processors, where each processor
performs a different task.
Different processors handle separate tasks that may be dependent or
independent but work collaboratively towards the same goal.
Example: In a web server, one thread handles incoming HTTP requests while
another processes database queries.
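A hedged sketch of the web-server example using Python threads; handle_requests and process_queries are made-up stand-ins that only sleep and print:

import threading
import time

def handle_requests():
    for i in range(3):
        time.sleep(0.1)                 # pretend to serve an HTTP request
        print(f"served request {i}")

def process_queries():
    for i in range(3):
        time.sleep(0.15)                # pretend to run a database query
        print(f"finished query {i}")

# Two different tasks run at the same time on separate threads.
t1 = threading.Thread(target=handle_requests)
t2 = threading.Thread(target=process_queries)
t1.start(); t2.start()
t1.join(); t2.join()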
4. Data-level parallelism:
The same operation is performed on multiple data elements
simultaneously.
Large datasets are divided into smaller chunks, and each processor
works on a chunk.
Example: Applying a filter to an image where each pixel is processed in
parallel.
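A minimal sketch of the image-filter example with multiprocessing.Pool; the "image" here is just a flat list of brightness values, and the chunk size of 60 is an arbitrary choice:

from multiprocessing import Pool

def brighten(chunk):
    # The same operation is applied to every pixel in the chunk.
    return [min(pixel + 50, 255) for pixel in chunk]

if __name__ == "__main__":
    image = list(range(240))  # toy grayscale image
    chunks = [image[i:i + 60] for i in range(0, len(image), 60)]
    with Pool(4) as pool:     # one worker per chunk
        pieces = pool.map(brighten, chunks)
    result = [p for piece in pieces for p in piece]
    print(result[:5])         # [50, 51, 52, 53, 54]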
Distributed Computing:
Distributed computing is the method of making multiple computers work together to solve a common problem.
It is like having a team of individuals, each contributing their skills to complete a complex project.
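A toy sketch of two programs cooperating over a network using Python sockets; here both "computers" run on localhost, and port 50007 is an arbitrary choice:

import socket
import threading
import time

def server():
    # One computer offers a service: it doubles any number sent to it.
    with socket.socket() as s:
        s.bind(("localhost", 50007))
        s.listen()
        conn, _ = s.accept()
        with conn:
            n = int(conn.recv(1024).decode())
            conn.sendall(str(n * 2).encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# A second computer sends work to the first and uses the result.
with socket.socket() as c:
    c.connect(("localhost", 50007))
    c.sendall(b"21")
    print(c.recv(1024).decode())  # 42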
Types of Distributed Computing:
Use Cases:
Banking systems.
E-commerce inventory systems.
5. Cloud Computing:
Computing resources (servers, storage, software) are delivered over the internet as on-demand services.
Example:
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
Use Cases:
On-demand storage, application hosting, and large-scale data processing.
Parallel Computing vs Distributed Computing:
1. Definition:
Parallel: Simultaneous execution of multiple tasks on multiple processors within a single system.
Distributed: Collaboration of multiple autonomous systems to achieve a common goal.
2. Architecture:
Parallel: Shared memory architecture (processors share memory and resources).
Distributed: Distributed memory architecture (each node has its own memory and resources).
3. Synchronization:
Parallel: High synchronization is required between threads or processes.
Distributed: Synchronization is lower but can involve coordination between systems.
4. Scalability:
Parallel: Limited by the number of processors that fit in a single system.
Distributed: Highly scalable; more nodes can be added over the network.
5. Fault Tolerance:
Parallel: Low; a failure in the single system halts the whole computation.
Distributed: High; the system can keep running even if some nodes fail.
6. Applications:
Parallel: Matrix computations, simulations, image processing; weather modeling, molecular simulations.
Distributed: Cloud computing, web services, distributed databases; big data analytics, content delivery networks (CDNs).
3. Elements of Parallel Computing
Parallel computing involves dividing a task into smaller parts and executing
them simultaneously on multiple processors to solve problems faster.
1. Tasks
2. Processes
3. Threads
4. Communication
5. Synchronization
6. Scalability
7. Speed and Efficiency
8. Load Balancing
9. Dependencies
1. Task:
A task is a unit of work or computation that can be executed
independently.
Tasks can be fine-grained (small and numerous) or coarse-grained (larger
and fewer).
They may be independent or dependent on other tasks.
Example:
Summing one block of a large array can be treated as a single task.
2. Processes:
A process is an independent program in execution, with its own private memory space and resources.
Processes provide coarser-grained parallelism than threads and do not share memory by default.
3. Threads:
A thread is the smallest unit of execution within a process, sharing the
process's memory space.
Threads are used for lightweight, fine-grained parallelism within a single
process.
Example:
One thread handles user interactions, another renders the web page, and
a third downloads resources, all at the same time.
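A minimal sketch showing that threads share their process's memory: both workers append to the same list object.

import threading

results = []  # one list in the process's memory, visible to every thread

def worker(name):
    results.append(f"{name} done")  # both threads write to the same object

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # contains both entries; the order may vary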
4. Communication:
The exchange of information between tasks or processes, often necessary
when tasks are interdependent.
Types: shared memory (tasks read and write common data) and message passing (tasks exchange explicit messages).
Example:
Two processes exchange partial results through a shared queue.
5. Synchronization:
Coordinating the timing and order of tasks, typically with locks, barriers, or semaphores, so that shared data stays consistent.
Example:
Threads take turns updating a shared counter, as in the sketch below.
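A hedged sketch of lock-based synchronization: without the lock, the two threads could interleave their read-modify-write steps and lose updates.

import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100_000):
        with lock:        # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000 with the lock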
6. Scalability:
The ability of a parallel computing system to handle increasing workloads
by adding more processors or resources.
Types: strong scaling (fixed problem size, more processors) and weak scaling (problem size grows with the processor count).
Example:
Doubling the number of worker nodes to handle twice as many user requests.
7. Speed and Efficiency:
Speedup is the serial execution time divided by the parallel execution time; efficiency is the speedup divided by the number of processors.
Example:
If a job takes 60 seconds on one processor and 20 seconds on four processors, the speedup is 3 and the efficiency is 3/4 = 0.75.
8. Load Balancing:
Distributing tasks evenly among processors to ensure no processor is
idle or overloaded.
Ensures maximum resource utilization and minimizes execution time.
Example:
A shared task queue lets idle workers pull the next job, as in the sketch below.
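A minimal sketch of dynamic load balancing: tasks of uneven size sit in a shared queue, and each worker pulls the next task as soon as it is free, so no worker sits idle while work remains.

import queue
import threading
import time

tasks = queue.Queue()
for size in [0.3, 0.1, 0.1, 0.1, 0.2, 0.1]:  # uneven task sizes, in seconds
    tasks.put(size)

def worker(name):
    while True:
        try:
            size = tasks.get_nowait()  # grab the next available task
        except queue.Empty:
            return
        time.sleep(size)               # pretend to do the work
        print(f"worker {name} finished a {size}s task")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()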
9. Dependencies:
Some tasks depend on the results of other tasks and cannot start until
those tasks are complete.
Types: data dependencies (one task needs another task's output) and control dependencies (one task determines whether another runs).
Example:
You can’t cook rice until you’ve boiled the water (data dependency).
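A small sketch of a data dependency using concurrent.futures: task B cannot start until task A's result is available, mirroring the boil-water example above; boil_water and cook_rice are made-up names.

from concurrent.futures import ThreadPoolExecutor

def boil_water():
    return "boiling water"            # task A

def cook_rice(water):
    return f"rice cooked in {water}"  # task B needs task A's output

with ThreadPoolExecutor() as pool:
    water = pool.submit(boil_water).result()  # wait: B depends on A
    rice = pool.submit(cook_rice, water)      # B can start only now
    print(rice.result())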
4. Elements of Distributed Computing
1. Nodes
2. Processes
3. Communication
4. Coordination
5. Synchronization
6. Scalability
7. Fault Tolerance
8. Transparency
9. Resource Sharing
10. Concurrency
11. Latency
12. Security
1. Nodes
The individual computers or devices that make up the distributed system, each with its own processor and memory.
Example:
2. Processes
The programs running on the nodes, each carrying out part of the overall computation.
Example:
3. Communication
The exchange of messages and data between nodes over the network, using protocols such as HTTP or remote procedure calls.
Example:
4. Coordination
Ensures that tasks across nodes are properly organized and executed
in the right sequence.
Role: Maintains order and consistency in a distributed system.
Example:
5. Synchronization
Aligning the timing and ordering of events across nodes, for example with clocks or distributed locks.
Example:
6. Scalability
The ability of the system to handle a growing workload by adding more nodes.
Example:
7. Fault Tolerance
The ability of the system to keep operating correctly even when some nodes or network links fail.
8. Transparency
Hiding the distributed nature of the system so that users and applications see a single, coherent system.
Example:
9. Resource Sharing
Nodes share hardware, software, and data resources such as storage, printers, and databases.
Example:
10. Concurrency
Many processes execute at the same time on different nodes, often accessing shared data.
Example:
11. Latency
The time delay for a message to travel from one node to another across the network.
Example:
12. Security
Protecting data and services from unauthorized access, typically through authentication and encryption.
Example:
5. Service-Oriented Architecture (SOA)
SOA is a design style in which applications are built from independent services that communicate over a network.
Key characteristics:
1. Services: self-contained units of functionality with well-defined interfaces.
2. Loose Coupling: services minimize their dependencies on one another, so one can change without breaking the others.
3. Interoperability: services on different platforms communicate through standard protocols and data formats.
4. Reusability: the same service can be used by many different applications.
5. Discoverability: services are published in a registry where consumers can find them.
Components of SOA:
1. Service Provider: builds, publishes, and maintains a service.
2. Service Registry: a directory where providers publish service descriptions and consumers look them up.
3. Service Consumer: discovers a service in the registry and invokes it.
4. Middleware: the communication layer (for example, an enterprise service bus) that connects providers and consumers.
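A toy sketch of the provider/registry/consumer interaction; the registry is just a dictionary, and the service name and function are made up for illustration.

# Service registry: a directory where providers publish services.
registry = {}

# Service provider: implements a service and publishes it.
def currency_convert(amount, rate):
    return amount * rate

registry["currency-conversion"] = currency_convert  # publish

# Service consumer: discovers the service by name and invokes it.
service = registry.get("currency-conversion")       # discover
if service is not None:
    print(service(100, 0.85))                        # invoke -> 85.0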
Benefits of SOA:
1. Flexibility: services can be changed or replaced independently.
2. Scalability: heavily used services can be scaled on their own.
3. Cost Efficiency: reusing existing services reduces development effort.
4. Interoperability: applications built on different platforms can work together.
5. Faster Development: new applications are assembled from existing services.
Applications of SOA:
1. E-Commerce: catalog, payment, and order services are combined into one storefront.
2. Healthcare: patient records are exchanged between hospital systems as services.
3. Banking: the same core services power ATMs, mobile apps, and online banking.