Program Partitioning and Scheduling
• Grain Size and Latency
• Grain Packing and Scheduling
• Static Multiprocessor Scheduling
Grain Size
• Grain size, or granularity, is a measure of the amount of
computation involved in a software process
• The simplest measure is the number of instructions in a grain
• Grain size determines the basic program segment chosen for
parallel processing
• Depending on the processing level, grains may be (see the sketch below)
• Fine
• Medium
• Coarse
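
A minimal sketch, assuming grain size is measured simply as the number of
loop iterations per schedulable unit (a stand-in for an instruction count);
the function name partition and the cut-off values are hypothetical, not
taken from the slides.

def partition(n_iterations, grain_size):
    # Split n_iterations of work into grains of at most grain_size iterations each.
    grains = []
    for start in range(0, n_iterations, grain_size):
        grains.append(range(start, min(start + grain_size, n_iterations)))
    return grains

if __name__ == "__main__":
    N = 10_000
    for label, g in (("fine", 1), ("medium", 100), ("coarse", 5_000)):
        grains = partition(N, g)
        print(f"{label:>6} grain: {len(grains):5d} grains, <= {g} iterations each")

The finer the partition, the more grains there are to schedule in parallel,
and also the more per-grain overhead the system must absorb.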
Levels of Parallelism
• Five levels of program execution represent different
computational grain sizes and changing communication and
control requirements
• The lower the level, the finer the granularity of the software
processes
• Execution of a program may involve a combination of these
levels
Levels of Parallelism
• The actual combination depends on the
• Application
• Formulation
• Algorithm
• Language
• Program
• Compilation support
• Hardware limitations
Job Level
• Grain size: tens of thousands of instructions in a single
program
• Handled by the program loader and by the operating system
• Time-sharing or space-sharing multiprocessors exploit this
level of parallelism
• Both time and space sharing are extensions of
multiprogramming
Job Level
• Fine-grain parallelism is often exploited at the instruction or
loop level
• Medium-grain parallelism at the task or job-step level demands
a significant role for the programmer and compiler
• Coarse-grain parallelism at the program level relies on the OS
and on the efficiency of the algorithm used
• Shared-variable communication is often used to support fine-grain
and medium-grain computations (see the threading sketch below)
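
A minimal sketch of shared-variable communication on a shared-memory machine,
assuming Python threads as stand-ins for processors; the shared accumulator
total and the four-way split are my own illustration, not from the slides.

import threading

total = 0                       # shared variable visible to all tasks
lock = threading.Lock()

def task(chunk):
    global total
    partial = sum(chunk)        # medium-grain computation, no communication
    with lock:                  # brief shared-variable communication
        total += partial

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i::4] for i in range(4)]           # four medium-grain tasks
    threads = [threading.Thread(target=task, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(total == sum(data))   # True: both equal 499500

Each task computes privately and touches the shared variable only briefly,
which is what keeps the communication cost of this style low.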
Job Level
• Message-passing multicomputers are used for medium-grain
and coarse-grain computations
• The finer the grain size (a rough cost model follows below)
• The higher the potential for parallelism
• The higher the communication and scheduling overhead
• Fine grain
• High degree of parallelism
• Heavier communication overhead
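
A rough cost model, entirely my own assumption rather than anything given in
the slides: parallel time is the work per round plus a fixed
scheduling/communication overhead per grain, so shrinking the grain raises
parallelism but also raises the total overhead.

import math

def estimated_time(total_work, grain_size, processors, overhead_per_grain):
    # Number of grains produced by the partition.
    n_grains = math.ceil(total_work / grain_size)
    # Grains run in rounds of at most `processors` grains at a time.
    rounds = math.ceil(n_grains / processors)
    compute = rounds * grain_size
    overhead = n_grains * overhead_per_grain     # scheduling + communication cost
    return compute + overhead

if __name__ == "__main__":
    for g in (1, 10, 100, 1_000, 10_000, 100_000):
        t = estimated_time(total_work=100_000, grain_size=g,
                           processors=16, overhead_per_grain=5)
        print(f"grain size {g:7d}: estimated time {t:10.1f}")

With these made-up numbers the total time is enormous at grain size 1
(overhead dominates) and again at grain size 100 000 (no parallelism left),
with a sweet spot in between.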
Communication Latency
• Communication patterns are determined by
• The algorithms used
• The architectural support provided
• Typical patterns include
• Permutations
• Broadcast
• Multicast
• Conference (many-to-many)
• Communication demand may limit the granularity or degree of
parallelism (see the latency sketch below)
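
A small sketch using a common textbook latency model, assumed here rather
than stated in the slides: a message of m words costs a fixed start-up time
plus a per-word transfer time, and a grain is only worth placing on another
processor if its computation time exceeds that latency. The parameter values
are arbitrary.

def comm_latency(words, t_startup=50.0, t_per_word=1.0):
    # T_comm = t_startup + m * t_per_word for an m-word message.
    return t_startup + words * t_per_word

def worth_offloading(grain_compute_time, words_exchanged):
    # Offloading pays only if the grain computes longer than it communicates.
    return grain_compute_time > comm_latency(words_exchanged)

if __name__ == "__main__":
    for compute in (10, 60, 500):
        ok = worth_offloading(compute, words_exchanged=20)
        print(f"grain compute time {compute:4d}: offload worthwhile? {ok}")

Under this model, grains that compute for less time than the 70-unit message
latency are better kept local, which is how communication demand limits how
fine the grains can usefully be.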
Contd.
• Communication issues involve
• Reduction of latency
• Prevention of deadlock
• Minimizing blocking in communication patterns
• The trade-off between parallelism and communication
overhead
Grain Packing and Scheduling
• The grain-size problem demands determination of both the
number and the size of grains in a parallel program
• The solution is both problem dependent and machine dependent
• Goal: produce a short schedule for fast execution of the
subdivided program modules (a scheduling sketch follows below)
• There is a trade-off between parallelism and scheduling/
synchronization overhead
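
A minimal sketch of computing a schedule length (makespan), assuming a simple
model: each node has a computation time, each edge a communication delay that
is paid only when its endpoints sit on different processors, and nodes are
listed in topological order. The graph, the two processor assignments, and
all names are hypothetical, not the book's example.

def schedule_length(comp, edges, assign, n_procs):
    # comp: {node: compute time}, edges: {(u, v): delay}, assign: {node: processor}.
    # Nodes in `comp` must be listed in topological order.
    preds = {v: [] for v in comp}
    for (u, v), d in edges.items():
        preds[v].append((u, d))
    proc_free = [0.0] * n_procs          # time at which each processor goes idle
    finish = {}
    for v in comp:
        ready = 0.0
        for u, d in preds[v]:
            # Communication delay is paid only across different processors.
            delay = d if assign[u] != assign[v] else 0.0
            ready = max(ready, finish[u] + delay)
        start = max(ready, proc_free[assign[v]])
        finish[v] = start + comp[v]
        proc_free[assign[v]] = finish[v]
    return max(finish.values())

if __name__ == "__main__":
    comp  = {"a": 4, "b": 1, "c": 1, "d": 6}
    edges = {("a", "b"): 6, ("a", "c"): 6, ("b", "d"): 6, ("c", "d"): 6}
    fine   = {"a": 0, "b": 1, "c": 0, "d": 0}   # b placed on a second processor
    packed = {"a": 0, "b": 0, "c": 0, "d": 0}   # everything packed onto one processor
    print("fine-grain schedule :", schedule_length(comp, edges, fine, 2))    # 23.0
    print("packed schedule     :", schedule_length(comp, edges, packed, 2))  # 12.0

In this toy graph the communication delays dominate, so packing everything
onto one processor happens to win; with cheaper communication or more compute
per node, the spread-out assignment could win instead.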
Grain Packing and Scheduling
• The time complexity involves both computation and
communication overheads
• Program partitioning involves the
• Algorithm designer
• Programmer
• Compiler
• Operating system support
Contd.
• Idea of grain packing: apply fine-grain partitioning first in
order to achieve a high degree of parallelism
• Then combine multiple fine grains into a coarse-grain node
• This eliminates unnecessary communication delay and reduces
the overall scheduling overhead
• Fine-grain operations within a single coarse-grain node are
assigned to the same processor for execution
• A fine-grain partition demands more interprocessor
communication (a packing sketch follows below)
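
A minimal sketch of the packing step itself, under the assumption that
merging a group of fine-grain nodes simply sums their computation times and
drops the edges that become internal to the new coarse-grain node (their
delay is treated as negligible). The node names and numbers are made up.

def pack(comp, edges, group, new_name):
    # comp: {node: compute time}, edges: {(u, v): delay}, group: fine-grain nodes to merge.
    packed_comp = {n: t for n, t in comp.items() if n not in group}
    packed_comp[new_name] = sum(comp[n] for n in group)
    packed_edges = {}
    for (u, v), d in edges.items():
        pu = new_name if u in group else u
        pv = new_name if v in group else v
        if pu == pv:
            continue                     # internal delay becomes negligible
        packed_edges[(pu, pv)] = max(d, packed_edges.get((pu, pv), 0))
    return packed_comp, packed_edges

if __name__ == "__main__":
    comp  = {"a": 2, "b": 2, "c": 2, "d": 3}
    edges = {("a", "b"): 6, ("a", "c"): 6, ("b", "d"): 6, ("c", "d"): 6}
    new_comp, new_edges = pack(comp, edges, {"a", "b", "c"}, "G1")
    print(new_comp)    # {'d': 3, 'G1': 6}
    print(new_edges)   # {('G1', 'd'): 6}

The packed graph can then be handed to a scheduler such as the sketch on the
earlier slide; only the single remaining edge still pays a communication delay.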
Contd.
• Grain packing is a trade-off between parallelism and
communication/scheduling overhead
• Internal delays within a coarse-grain node are negligible
• The optimal grain size achieves the shortest schedule for the
nodes on a parallel computer system
• In the program-decomposition example (figure slides follow), the
fine-grain schedule is longer (42 time units) because of more
communication delay
• The coarse-grain (packed) schedule is shorter (38 time units)
because communication within the same node is negligible
Program decomposition (figure slides)