HPC 1

The document discusses high performance computing and parallel computing. It covers topics like SIMD and MIMD architectures, memory latency and bandwidth, message passing costs, uniform and non-uniform memory access models, parallel algorithms, data decomposition techniques, task characteristics, dynamic mapping techniques, load balancing techniques, and task dependency graphs.


Total No. of Questions : 4]                                SEAT No. :
PA-10170                                    [Total No. of Pages : 2

[6010]-40
B.E. (Computer Engineering) (Insem)
HIGH PERFORMANCE COMPUTING
(2019 Pattern) (Semester - VIII) (410250) (Theory)

Time : 1 Hour]                                      [Max. Marks : 30
Instructions to the candidates:
1) Answer Q.1 or Q.2, Q.3 or Q.4.
2) Neat diagrams must be drawn wherever necessary.
3) Figures to the right indicate full marks.
4) Assume suitable data, if necessary.

Q1) a) Explain SIMD and MIMD architectures with suitable diagrams. [4]
    b) Explain the impact of memory latency and memory bandwidth on system performance. [6]
    c) Explain message passing costs in parallel computers. [5]

OR

Q2) a) Describe uniform-memory-access (UMA) and non-uniform-memory-access (NUMA) models with diagrammatic representation. [6]
    b) Describe the scope of parallel computing. Give applications of parallel computing. [4]
    c) Write a short note on any two: [5]
       i) Dataflow models
       ii) Demand-driven computation
       iii) Cache memory

P.T.O.
Q3) a) Explain any three data decomposition techniques with examples. [6]
    b) Explain different characteristics of tasks. [4]
    c) Explain the classification of dynamic mapping techniques. [5]

OR

Q4) a) What are mapping techniques for load balancing? Explain at least two mapping techniques. [4]
    b) Explain any three parallel algorithm models with suitable examples. [6]
    c) Draw the task-dependency graph for finding the minimum number in the sequence {4, 9, 1, 7, 8, 11, 2, 12}, where each node in the tree represents the task of finding the minimum of a pair of numbers. Compare this with the serial version of finding the minimum number in an array. [5]
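The tree reduction asked for in Q4 c) can be sketched in code. The sketch below is illustrative only (it is not part of the original paper, and the function names `tree_min` and `serial_min` are my own): each pairwise `min` in `tree_min` corresponds to one node of the task-dependency graph, and all pairs at the same level are independent, so a parallel machine could evaluate each level concurrently in about log2(n) steps, whereas the serial version forms one chain of n - 1 dependent comparisons.

```python
def tree_min(seq):
    """Minimum via pairwise tree reduction.

    Each pairwise min is one node of the task-dependency graph;
    nodes at the same level have no dependencies between them.
    """
    level = list(seq)
    while len(level) > 1:
        # Take minima of adjacent pairs; an odd leftover passes through.
        nxt = [min(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]


def serial_min(seq):
    """Serial version: a single chain of n - 1 dependent comparisons."""
    m = seq[0]
    for x in seq[1:]:
        if x < m:
            m = x
    return m


print(tree_min([4, 9, 1, 7, 8, 11, 2, 12]))    # 1
print(serial_min([4, 9, 1, 7, 8, 11, 2, 12]))  # 1
```

For the eight-element sequence from the question, `tree_min` performs three levels of pairwise comparisons ([4, 1, 8, 2], then [1, 2], then [1]), while `serial_min` needs seven sequential comparisons.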
[6010]-40 2