
COMPUTER ORGANIZATION AND DESIGN
The Hardware/Software Interface
5th Edition

Chapter 6
Parallel Processors from Client to Cloud

6.1 Introduction

Introduction

Goal: connect multiple computers to get higher performance
  Multiprocessors
  Scalability, availability, power efficiency
Task-level (process-level) parallelism
  High throughput for independent jobs
Parallel processing program
  Single program run on multiple processors
Multicore microprocessors
  Chips with multiple processors (cores)
Chapter 6 Parallel Processors from Client to Cloud 2

Hardware and Software

Hardware
  Serial: e.g., Pentium 4
  Parallel: e.g., quad-core Xeon e5345
Software
  Sequential: e.g., matrix multiplication
  Concurrent: e.g., operating system
Sequential/concurrent software can run on serial/parallel hardware
Challenge: making effective use of parallel hardware
Chapter 6 Parallel Processors from Client to Cloud 3

What We've Already Covered

2.11: Parallelism and Instructions
  Synchronization
3.6: Parallelism and Computer Arithmetic
  Subword Parallelism
4.10: Parallelism and Advanced Instruction-Level Parallelism
5.10: Parallelism and Memory Hierarchies
  Cache Coherence
Chapter 6 Parallel Processors from Client to Cloud 4

6.2 The Difficulty of Creating Parallel Processing Programs

Parallel Programming

Parallel software is the problem
Need to get significant performance improvement
  Otherwise, just use a faster uniprocessor, since it's easier!
Difficulties
  Partitioning
  Coordination
  Communications overhead
Chapter 6 Parallel Processors from Client to Cloud 5

Amdahl's Law

Sequential part can limit speedup
Example: 100 processors, 90× speedup?
  Tnew = Tparallelizable/100 + Tsequential

  Speedup = 1 / ((1 − Fparallelizable) + Fparallelizable/100) = 90

  Solving: Fparallelizable = 0.999
Need sequential part to be 0.1% of original time
Chapter 6 Parallel Processors from Client to Cloud 6
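A minimal C sketch of the Amdahl's Law arithmetic on the slide above; the function names and example values are ours, used only to reproduce the 100-processor calculation under the usual assumptions of perfect load balance and no communication overhead.

/* amdahl.c - hedged sketch of Amdahl's Law */
#include <stdio.h>

/* Speedup when a fraction f of the work runs on p processors. */
double speedup(double f, int p) {
    return 1.0 / ((1.0 - f) + f / p);
}

/* Fraction that must be parallelizable to reach a target speedup on p processors. */
double needed_fraction(double target, int p) {
    return (1.0 - 1.0 / target) / (1.0 - 1.0 / (double)p);
}

int main(void) {
    /* Reproduces the slide: a 90x speedup on 100 processors needs F ~= 0.999 */
    printf("needed F for 90x on 100 processors = %.4f\n", needed_fraction(90.0, 100));
    printf("speedup with F = 0.999 on 100 processors = %.1f\n", speedup(0.999, 100));
    return 0;
}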

Scaling Example

Workload: sum of 10 scalars, and 10 × 10 matrix sum
  Single processor: Time = (10 + 100) × t_add
  10 processors
    Time = 10 × t_add + 100/10 × t_add = 20 × t_add
    Speedup = 110/20 = 5.5 (55% of potential)
  100 processors (scaling up from 10 to 100 processors)
    Time = 10 × t_add + 100/100 × t_add = 11 × t_add
    Speedup = 110/11 = 10 (10% of potential)
Assumes load can be balanced across processors
Chapter 6 Parallel Processors from Client to Cloud 7
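A small C sketch (ours, not from the slides) that reproduces the strong-scaling numbers above, measuring time in units of t_add and assuming perfect load balance.

#include <stdio.h>

/* 10 scalar adds run sequentially; the n x n matrix sum is split over p processors. */
double time_units(int n, int p) { return 10.0 + (double)(n * n) / p; }

int main(void) {
    int n = 10;
    double t1 = time_units(n, 1);                               /* 110 t_add */
    printf("p=10:  speedup = %.1f\n", t1 / time_units(n, 10));  /* 5.5  */
    printf("p=100: speedup = %.1f\n", t1 / time_units(n, 100)); /* 10.0 */
    return 0;
}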

Scaling Example (cont.)

What if matrix size is 100 × 100?
  Single processor: Time = (10 + 10000) × t_add
  10 processors
    Time = 10 × t_add + 10000/10 × t_add = 1010 × t_add
    Speedup = 10010/1010 = 9.9 (99% of potential)
  100 processors
    Time = 10 × t_add + 10000/100 × t_add = 110 × t_add
    Speedup = 10010/110 = 91 (91% of potential)
Assuming load balanced
Chapter 6 Parallel Processors from Client to Cloud 8

Strong vs. Weak Scaling

Strong scaling: problem size fixed
  As in previous example
Weak scaling: problem size proportional to number of processors
  10 processors, 10 × 10 matrix
    Time = 20 × t_add
  100 processors, 32 × 32 matrix
    Time = 10 × t_add + 1000/100 × t_add = 20 × t_add
  Constant performance in this example
Chapter 6 Parallel Processors from Client to Cloud 9

6.3 SISD, MIMD, SIMD, SPMD, and Vector

Instruction and Data Streams

An alternate classification of parallel systems:

                                        Data Streams
                                Single               Multiple
  Instruction      Single       SISD:                SIMD:
  Streams                       Intel Pentium 4      SSE instructions of x86
                   Multiple     MISD:                MIMD:
                                No examples today    Intel Xeon e5345

SPMD: Single Program Multiple Data
  A parallel program on a MIMD computer
  Conditional code for different processors
Chapter 6 Parallel Processors from Client to Cloud 10

Vector Processors

Highly pipelined function units
Stream data from/to vector registers (with multiple elements per vector register) to/from the units
  Data collected from memory into registers
  Results stored from registers to memory
Example: vector extension to MIPS
  32 × 64-element registers (64-bit elements)
  Vector instructions
    lv, sv: load/store to/from vector registers
    addv.d: add vectors of double
    addvs.d: add scalar to each element of vector of double
Significantly reduces instruction-fetch bandwidth
Chapter 6 Parallel Processors from Client to Cloud 11

Example: DAXPY (Y = a × X + Y)

Conventional MIPS code:
        l.d    $f0,a($sp)       ;load scalar a
        addiu  r4,$s0,#512      ;upper bound of what to load
loop:   l.d    $f2,0($s0)       ;load x(i)
        mul.d  $f2,$f2,$f0      ;a × x(i)
        l.d    $f4,0($s1)       ;load y(i)
        add.d  $f4,$f4,$f2      ;a × x(i) + y(i)
        s.d    $f4,0($s1)       ;store into y(i)
        addiu  $s0,$s0,#8       ;increment index to x
        addiu  $s1,$s1,#8       ;increment index to y
        subu   $t0,r4,$s0       ;compute bound
        bne    $t0,$zero,loop   ;check if done

Vector MIPS code:
        l.d     $f0,a($sp)      ;load scalar a
        lv      $v1,0($s0)      ;load vector x (64 elements)
        mulvs.d $v2,$v1,$f0     ;vector-scalar multiply
        lv      $v3,0($s1)      ;load vector y
        addv.d  $v4,$v2,$v3     ;add y to product
        sv      $v4,0($s1)      ;store the result

Chapter 6 Parallel Processors from Client to Cloud 12
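For reference, a plain-C version of the DAXPY computation above; a hedged sketch, with 64 elements chosen only to match the vector-register length in the example.

/* DAXPY (Y = a*X + Y): the loop that both the scalar and vector MIPS code implement. */
void daxpy(int n, double a, const double *x, double *y) {
    for (int i = 0; i < n; i++)      /* n = 64 matches one vector register */
        y[i] = a * x[i] + y[i];
}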

Vector vs. Scalar

Vector architectures and compilers
  Simplify data-parallel programming
  Explicit statement of absence of loop-carried dependences
    Reduced checking in hardware
  Regular access patterns benefit from interleaved and burst memory
  Avoid control hazards by avoiding loops
More general than ad hoc media extensions (such as MMX, SSE)
  Better match with compiler technology
Chapter 6 Parallel Processors from Client to Cloud 13

SIMD

Operate elementwise on vectors of data
  E.g., MMX and SSE instructions in x86
    Multiple data elements in 128-bit wide registers
All processors execute the same instruction at the same time
  Each with different data address, etc.
Simplifies synchronization
Reduced instruction control hardware
Works best for highly data-parallel applications, such as media applications
Chapter 6 Parallel Processors from Client to Cloud 14

SIMD Instruction Set Extensions for Multimedia

SIMD Extensions

Media applications operate on data types narrower than the native word size
  Example: disconnect carry chains to partition adder (see the C sketch after this slide)
Limitations, compared to vector instructions:
  Number of data operands encoded into opcode
  No sophisticated addressing modes (strided, scatter-gather)
  Limited parallelism (register width)
  No mask registers

Copyright 2012, Elsevier Inc. All rights reserved.

15
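A hedged C sketch of the subword-parallelism idea on the slide above: one 128-bit SSE2 instruction performs eight independent 16-bit adds, as if the carry chains of a wide adder were disconnected. The function name and test values are ours.

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

/* Eight 16-bit additions with a single 128-bit SSE2 add. */
void add8_i16(const int16_t *a, const int16_t *b, int16_t *out) {
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi16(va, vb));
}

int main(void) {
    int16_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int16_t b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int16_t c[8];
    add8_i16(a, b, c);
    for (int i = 0; i < 8; i++) printf("%d ", c[i]);  /* 11 22 33 ... 88 */
    printf("\n");
    return 0;
}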

SIMD Instruction Set Extensions for Multimedia

SIMD Implementations

Implementations:
  Intel MMX (1996)
    Eight 8-bit integer ops or four 16-bit integer ops
  Streaming SIMD Extensions (SSE) (1999)
    Eight 16-bit integer ops
    Four 32-bit integer/fp ops or two 64-bit integer/fp ops
  Advanced Vector Extensions (AVX) (2010) (Fig. 4.9)
    Four 64-bit integer/fp ops
  Operands must be consecutive and aligned memory locations

Copyright 2012, Elsevier Inc. All rights reserved.

16

SIMD Instruction Set Extensions for Multimedia

Example SIMD Code

Example DAXPY (4D means 4 double-precision operands):

        L.D     F0,a            ;load scalar a
        MOV     F1,F0           ;copy a into F1 for SIMD MUL
        MOV     F2,F0           ;copy a into F2 for SIMD MUL
        MOV     F3,F0           ;copy a into F3 for SIMD MUL
        DADDIU  R4,Rx,#512      ;last address to load
Loop:   L.4D    F4,0[Rx]        ;load X[i], X[i+1], X[i+2], X[i+3]
        MUL.4D  F4,F4,F0        ;a×X[i], a×X[i+1], a×X[i+2], a×X[i+3]
                                ; i.e., F4*F0, F5*F1, F6*F2, F7*F3
        L.4D    F8,0[Ry]        ;load Y[i], Y[i+1], Y[i+2], Y[i+3]
        ADD.4D  F8,F8,F4        ;a×X[i]+Y[i], ..., a×X[i+3]+Y[i+3]
        S.4D    0[Ry],F8        ;store into Y[i], Y[i+1], Y[i+2], Y[i+3]
        DADDIU  Rx,Rx,#32       ;increment index to X
        DADDIU  Ry,Ry,#32       ;increment index to Y
        DSUBU   R20,R4,Rx       ;compute bound
        BNEZ    R20,Loop        ;check if done

Copyright 2012, Elsevier Inc. All rights reserved.

17

Vector vs. Multimedia Extensions

Vector instructions have a variable vector width; multimedia extensions have a fixed width
Vector instructions support strided access; multimedia extensions do not
Vector units can be a combination of pipelined and arrayed functional units
Chapter 6 Parallel Processors from Client to Cloud 18

6.4 Hardware Multithreading

Multithreading

Performing multiple threads of execution in parallel
  Replicate registers, PC, etc.
  Fast switching between threads
Fine-grain multithreading
  Switch threads after each cycle
  Interleave instruction execution
  If one thread stalls, others are executed
Coarse-grain multithreading
  Only switch on long stall (e.g., L2-cache miss)
  Simplifies hardware, but doesn't hide short stalls (e.g., data hazards)
Chapter 6 Parallel Processors from Client to Cloud 19

Simultaneous Multithreading

In a multiple-issue, dynamically scheduled processor
  Schedule instructions from multiple threads
  Instructions from independent threads execute when function units are available
  Within threads, dependencies handled by scheduling and register renaming
Example: Intel Pentium 4 HT (Hyper-Threading)
  Two threads: duplicated registers, shared function units and caches
Chapter 6 Parallel Processors from Client to Cloud 20

Multithreading Example

Chapter 6 Parallel Processors from Client to Cloud 21

Multi-issue / Multithreaded Categories

[Figure: issue slots over time (one row per processor cycle) for Superscalar (single thread), Fine-Grained, Coarse-Grained, Multiprocessing, and Simultaneous Multithreading; shading distinguishes Threads 1-5 and idle slots.]
22

Simultaneous Multithreading (SMT)

SMT insight: a dynamically scheduled processor already has many HW mechanisms to support multithreading
  Large set of virtual registers that can be used to hold the register sets of independent threads
  Register renaming provides unique register identifiers, so instructions from multiple threads can be mixed in the datapath
  Out-of-order completion allows the threads to execute out of order, and gives better utilization of the HW
Just add a per-thread renaming table and keep separate PCs
  Independent commitment can be supported by logically keeping a separate reorder buffer for each thread
  Fetch/issue from multiple threads per cycle
23

FIGURE 6.6 The speed-up from using multithreading on one core on an i7 processor averages 1.31 for the PARSEC benchmarks (see Section 6.9), and the energy efficiency improvement is 1.07. This data was collected and analyzed by Esmaeilzadeh et al. [2011].
Copyright 2014 Elsevier Inc. All rights reserved.

24

Handling Memory Wall

Processors
  Caches
    Intelligent cache management, non-blocking caches
  Prefetching
    Exploit MLP (memory-level parallelism)
    Prefetch instructions and data before use
    Issues: accuracy, timeliness, pollution, and bandwidth
  Multithreading
    Hide memory latency with multiple threads
Chapter 6 Parallel Processors from Client to Cloud 25

6.5 Multicore and Other Shared Memory Multiprocessors

Shared Memory

SMP: shared memory multiprocessor (multicore)
  Hardware provides a single physical address space for all processors
  Synchronize shared variables using locks
  Memory access time
    UMA (uniform) vs. NUMA (nonuniform)
Chapter 6 Parallel Processors from Client to Cloud 26

Example: Sum Reduction on Shared-Memory Parallel Systems

Sum 100,000 numbers on a 100-processor UMA machine
  Each processor has ID: 0 ≤ Pn ≤ 99
  Partition: 1000 numbers per processor
  Initial summation on each processor:
    sum[Pn] = 0;
    for (i = 1000*Pn; i < 1000*(Pn+1); i = i + 1)
      sum[Pn] = sum[Pn] + A[i];
Now need to add these partial sums
  Reduction: divide and conquer
  Half the processors add pairs, then a quarter, ...
  Need to synchronize between reduction steps
Chapter 6 Parallel Processors from Client to Cloud 27

Example: Sum Reduction

half = 100;
repeat /* parallel */
  synch();
  if (half%2 != 0 && Pn == 0)
    sum[0] = sum[0] + sum[half-1];
    /* Conditional sum needed when half is odd;
       Processor 0 gets missing element */
  half = half/2; /* dividing line on who sums */
  if (Pn < half) sum[Pn] = sum[Pn] + sum[Pn+half];
until (half == 1);
Chapter 6 Parallel Processors from Client to Cloud 28
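For comparison with the manual tree reduction above, a hedged OpenMP sketch in C: the runtime combines the per-thread partial sums for us. The array contents are placeholders, not from the slides.

#include <stdio.h>

int main(void) {
    enum { N = 100000 };
    static double A[N];
    for (int i = 0; i < N; i++) A[i] = 1.0;   /* placeholder data */

    double sum = 0.0;
    /* Each thread accumulates a private partial sum; OpenMP adds them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += A[i];

    printf("sum = %f\n", sum);
    return 0;
}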

6.6 Introduction to Graphics Processing Units

History of GPUs

Early video cards
  Frame buffer memory with address generation for video output
3D graphics processing
  Originally on high-end computers (e.g., SGI)
  Moore's Law → lower cost, higher density
  3D graphics cards for PCs and game consoles
Graphics Processing Units
  Processors oriented to 3D graphics tasks
  Vertex/pixel processing, shading, texture mapping, rasterization
Chapter 6 Parallel Processors from Client to Cloud 29

Graphics in the System

Chapter 6 Parallel Processors from Client to Cloud 30

An Example of Physical Reality Behind CUDA: Host Motherboard

CPU (host)
GPU with local DRAM (device)
31

GPU Architectures

Processing is highly data-parallel
  GPUs are highly multithreaded
  Use thread switching to hide memory latency
    Less reliance on multi-level caches
  Graphics memory is wide and high-bandwidth
Trend toward general-purpose GPUs
  Heterogeneous CPU/GPU systems
  CPU for sequential code, GPU for parallel code
Programming languages/APIs
  DirectX, OpenGL
  C for Graphics (Cg), High Level Shader Language (HLSL)
  Compute Unified Device Architecture (CUDA)
  OpenCL
Chapter 6 Parallel Processors from Client to Cloud 32

Comparison: GPU vs. Multicore CPU

Difference in utilizing on-chip transistors:
  CPU devotes significant area to cache space and control logic for general-purpose applications
  GPU builds a large number of replicated cores for data-parallel, thread-parallel computations; requires higher DRAM bandwidth

33

G80 Graphics Mode

The future of GPUs is programmable processing, so build the architecture around the processor.

[Figure: G80 graphics-mode pipeline. The Host feeds an Input Assembler; Vtx, Geom, and Pixel Thread Issue units plus Setup/Rstr/ZCull dispatch work across clusters of streaming processors (SP) with texture fetch (TF) units and L1 caches, backed by L2 caches and frame buffer (FB) partitions, all coordinated by the Thread Processor.]

G80 CUDA Mode: A Device Example

GPGPU: processors execute general computing threads programmed using CUDA
  New operating mode/HW interface for computing
  Shares the same hardware with graphics applications

[Figure: G80 in CUDA mode. The Host feeds an Input Assembler and Thread Execution Manager; clusters of processors, each with a Parallel Data Cache and texture units, reach Global Memory through load/store units.]

Graphical Processing Units

Nvidia GPU Family

GeForce: GTX series, for graphics applications
Tesla: for GPGPU and data center applications
Quadro: for professional workstations; Tegra: for mobile devices
Architecture generations:
  G80
  GT200
  Fermi
  Kepler
Copyright 2012, Elsevier Inc. All rights reserved.

36

Example: NVIDIA Tesla

Streaming multiprocessor
  8 streaming processors
  Note: the Fermi generation gets rid of the TPC level
Chapter 6 Parallel Processors from Client to Cloud 37

Streaming Multiprocessors

FIGURE 6.9 Simplified block diagram of the datapath of a multithreaded SIMD Processor. It has 16 SIMD lanes. The SIMD Thread Scheduler has many independent SIMD threads that it chooses from to run on this processor.
Copyright 2014 Elsevier Inc. All rights reserved.

38

CUDA Programming Model

Host invokes kernels/grids to execute on the GPU; control returns to the host after execution
Three-level parallelism: Grid (kernel), Block, Thread

[Figure: host execution alternates with kernel launches (kernel 0, kernel 1); each kernel is a grid of blocks (Block 0 ... Block N), and each block contains many threads.]

39

Graphical Processing Units

Threads and Blocks

A thread is associated with each data element
Threads are organized into blocks
  Warps are the hardware scheduling unit; each block has multiple warps to be scheduled
  Each warp has 32 threads executed in parallel, called SIMT (Single Instruction Multiple Thread)
Blocks are organized into a grid (kernel)
  GPU hardware handles thread scheduling and management, not applications or the OS
  Blocks are scheduled to run on SMs (Streaming Multiprocessors)

CUDA Programming Model

Host invokes kernels/grids to execute on the GPU; control returns to the host after execution
Three-level parallelism: Grid (kernel), Block, Thread
Important: GPU programming relies on the programmer to optimize execution, including three-level parallelization, data movement, register and memory usage, handling control divergence, etc.
  The CPU, by contrast, hides most of this complexity from programmers
(An index-arithmetic sketch follows this slide.)

41
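A hedged, CPU-side C sketch of the index arithmetic behind the three-level model above: each (block, thread) pair computes one global element index, here applied to a DAXPY body. The grid and block sizes are illustrative, not from the slides.

#include <stdio.h>

int main(void) {
    const int gridDim = 4;     /* blocks per grid (illustrative)   */
    const int blockDim = 8;    /* threads per block (illustrative) */
    const int n = gridDim * blockDim;
    float x[32], y[32], a = 2.0f;
    for (int i = 0; i < n; i++) { x[i] = (float)i; y[i] = 1.0f; }

    /* On a GPU these two loops run as parallel blocks and threads;
     * the body is what each thread would execute. */
    for (int blockIdx = 0; blockIdx < gridDim; blockIdx++)
        for (int threadIdx = 0; threadIdx < blockDim; threadIdx++) {
            int i = blockIdx * blockDim + threadIdx;  /* global element index */
            if (i < n) y[i] = a * x[i] + y[i];        /* DAXPY body           */
        }

    printf("y[31] = %f\n", y[31]);  /* 2*31 + 1 = 63 */
    return 0;
}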

Example: NVIDIA Tesla

Streaming Processors
  Single-precision FP and integer units
  Each SP is fine-grained multithreaded
Warp: group of 32 threads
  Executed in parallel, SIMD style
    8 SPs × 4 clock cycles
  Hardware contexts for 24 warps
    Registers, PCs, ...
Chapter 6 Parallel Processors from Client to Cloud 42

Comparison: G80, GT200, Fermi

Three classes of hardware: GeForce, Quadro, and Tesla

43

Comparison: Kepler vs. Fermi

                         Kepler       Fermi        Fermi         Fermi
                         GTX 680      GTX 580      GTX 560 Ti    GTX 480
Stream Processors        1536         512          384           480
Texture Units            128          64           64            60
ROPs                     32           48           32            48
Core Clock               1006MHz      772MHz       822MHz        700MHz
Shader Clock             N/A          1544MHz      1644MHz       1401MHz
Boost Clock              1058MHz      N/A          N/A           N/A
Memory Clock             6.008GHz     4.008GHz     4.008GHz      3.696GHz
                         GDDR5        GDDR5        GDDR5         GDDR5
Memory Bus Width         256-bit      384-bit      256-bit       384-bit
Frame Buffer             2GB          1.5GB        1GB           1.5GB
FP64                     1/24 FP32    1/8 FP32     1/12 FP32     1/12 FP32
TDP                      195W         244W         170W          250W
Transistor Count         3.5B         3B           1.95B         3B
Manufacturing Process    TSMC 28nm    TSMC 40nm    TSMC 40nm     TSMC 40nm
Launch Price             $499         $499         $249          $499

44

Comparison: Fermi, Kepler

45


Classifying GPUs

Don't fit nicely into the SIMD/MIMD model
  Conditional execution in a thread allows an illusion of MIMD
    But with performance degradation
    Need to write general-purpose code with care

                                    Static: Discovered      Dynamic: Discovered
                                    at Compile Time         at Runtime
  Instruction-Level Parallelism     VLIW                    Superscalar
  Data-Level Parallelism            SIMD or Vector          Tesla Multiprocessor

Chapter 6 Parallel Processors from Client to Cloud 46

GPU Memory Structures

Per-SM with L1 cache

Chapter 6 Parallel Processors from Client to Cloud 47

CUDA Memory Model Overview

Global memory (also called device memory)
  Main means of communicating R/W data between host and device
  Contents visible to all threads
  Long-latency access
We will focus on global memory for now
  Constant and texture memory will come later

[Figure: a grid of blocks; each block has its own shared memory and per-thread registers; the host and all threads access global memory.]

48

Putting GPUs into Perspective

Feature                                                      Multicore with SIMD   GPU
SIMD processors                                              4 to 8                8 to 16
SIMD lanes/processor                                         2 to 4                8 to 16
Multithreading hardware support for SIMD threads             2 to 4                16 to 32
Typical ratio of single- to double-precision performance     2:1                   2:1
Largest cache size                                           8 MB                  0.75 MB
Size of memory address                                       64-bit                64-bit
Size of main memory                                          8 GB to 256 GB        4 GB to 6 GB
Memory protection at level of page                           Yes                   Yes
Demand paging                                                Yes                   No
Integrated scalar processor/SIMD processor                   Yes                   No
Cache coherent                                               Yes                   No

Chapter 6 Parallel Processors from Client to Cloud 49


6.11 Real Stuff: Benchmarking and Rooflines i7 vs. Tesla

i7-960 vs. NVIDIA Tesla 280/480

Chapter 6 Parallel Processors from Client to Cloud 50

6.7 Clusters, WSC, and Other Message-Passing MPs

Message Passing: A Different Parallel Computing Model

Each processor has a private physical address space
Hardware sends/receives messages between processors

Chapter 6 Parallel Processors from Client to Cloud 51

Loosely Coupled Clusters

Network of independent computers
  Each has private memory and OS
  Connected using I/O system
    E.g., Ethernet/switch, Internet
Suitable for applications with independent tasks
  Web servers, databases, simulations, ...
  High availability, scalable, affordable
Problems
  Administration cost (prefer virtual machines)
  Low interconnect bandwidth
    c.f. processor/memory bandwidth on an SMP


Chapter 6 Parallel Processors from Client to Cloud 52

Sum Reduction (Again)

Sum 100,000 numbers on 100 processors
  First distribute 1000 numbers to each
  Then do partial sums:
    sum = 0;
    for (i = 0; i < 1000; i = i + 1)
      sum = sum + AN[i];
Reduction
  Half the processors send, the other half receive and add
  Then a quarter send, a quarter receive and add, ...
Chapter 6 Parallel Processors from Client to Cloud 53


Sum Reduction (Again)

Given send() and receive() operations:

limit = 100; half = 100; /* 100 processors */
repeat
  half = (half+1)/2;  /* send vs. receive dividing line */
  if (Pn >= half && Pn < limit)
    send(Pn - half, sum);
  if (Pn < (limit/2))
    sum = sum + receive();
  limit = half;       /* upper limit of senders */
until (half == 1);    /* exit with final sum */

Send/receive also provide synchronization
Assumes send/receive take similar time to addition
Chapter 6 Parallel Processors from Client to Cloud 55
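A hedged MPI version of the same reduction (our sketch, not from the text): each rank computes a local partial sum, and MPI_Reduce performs the send/receive tree shown in the pseudocode above.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank sums its local 1000 numbers (all 1.0 here, as placeholder data). */
    double local = 0.0;
    for (int i = 0; i < 1000; i++) local += 1.0;

    /* The library implements the reduction tree and delivers the total to rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("total = %f (from %d ranks)\n", total, nprocs);
    MPI_Finalize();
    return 0;
}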

Grid Computing

Separate computers interconnected by long-haul networks
  E.g., Internet connections
  Work units farmed out, results sent back
Can make use of idle time on PCs
  E.g., SETI@home, World Community Grid

Chapter 6 Parallel Processors from Client to Cloud 56

6.8 Introduction to Multiprocessor Network Topologies

Interconnection Networks (SKIP!)

Network topologies: arrangements of processors, switches, and links
  Bus
  Ring
  2D mesh
  N-cube (N = 3)
  Fully connected
Chapter 6 Parallel Processors from Client to Cloud 57

Multistage Networks

Chapter 6 Parallel Processors from Client to Cloud 58

Network Characteristics

Performance
  Latency per message (unloaded network)
  Throughput
    Link bandwidth
    Total network bandwidth
    Bisection bandwidth
  Congestion delays (depending on traffic)
Cost
Power
Routability in silicon
Chapter 6 Parallel Processors from Client to Cloud 59

6.10 Multiprocessor Benchmarks and Performance Models

Parallel Benchmarks

Linpack: matrix linear algebra
SPECrate: parallel run of SPEC CPU programs
  Job-level parallelism
SPLASH: Stanford Parallel Applications for Shared Memory
  Mix of kernels and applications, strong scaling
NAS (NASA Advanced Supercomputing) suite
  Computational fluid dynamics kernels
PARSEC (Princeton Application Repository for Shared Memory Computers) suite
  Multithreaded applications using Pthreads and OpenMP

Chapter 6 Parallel Processors from Client to Cloud 60

Code or Applications?

Traditional benchmarks
  Fixed code and data sets
Parallel programming is evolving
  Should algorithms, programming languages, and tools be part of the system?
  Compare systems, provided they implement a given application
  E.g., Linpack, Berkeley Design Patterns
Would foster innovation in approaches to parallelism
Chapter 6 Parallel Processors from Client to Cloud 61

Modeling Performance

Assume performance metric of interest is achievable GFLOPs/sec
  Measured using computational kernels from Berkeley Design Patterns
Arithmetic intensity of a kernel
  FLOPs per byte of memory accessed
For a given computer, determine
  Peak GFLOPS (from data sheet)
  Peak memory bytes/sec (using Stream benchmark)
Chapter 6 Parallel Processors from Client to Cloud 62

Roofline Diagram (SKIP!)

Attainable GFLOPs/sec
  = Min(Peak Memory BW × Arithmetic Intensity, Peak FP Performance)

Chapter 6 Parallel Processors from Client to Cloud 63
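A small C sketch of the roofline formula above; the peak GFLOP/s and bandwidth numbers are placeholders, not measurements from the text.

#include <stdio.h>

/* Attainable GFLOP/s is capped by either the memory system or peak FP throughput. */
double roofline(double peak_gflops, double peak_bw_gbytes, double intensity) {
    double memory_bound = peak_bw_gbytes * intensity;  /* GB/s * FLOPs/byte */
    return memory_bound < peak_gflops ? memory_bound : peak_gflops;
}

int main(void) {
    double peak_gflops = 64.0, peak_bw = 16.0;  /* illustrative machine */
    for (double ai = 0.25; ai <= 8.0; ai *= 2)
        printf("intensity %.2f FLOPs/byte -> %.1f GFLOP/s attainable\n",
               ai, roofline(peak_gflops, peak_bw, ai));
    return 0;
}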

Comparing Systems

Example: Opteron X2 vs. Opteron X4
  2-core vs. 4-core, 2× FP performance/core, 2.2GHz vs. 2.3GHz
  Same memory system
To get higher performance on X4 than X2
  Need high arithmetic intensity
  Or working set must fit in X4's 2MB L3 cache

Chapter 6 Parallel Processors from Client to Cloud 64

Optimizing Performance

Optimize FP performance
  Balance adds & multiplies
  Improve superscalar ILP and use of SIMD instructions
Optimize memory usage
  Software prefetch
    Avoid load stalls
  Memory affinity
    Avoid non-local data accesses
Chapter 6 Parallel Processors from Client to Cloud 65

Optimizing Performance

Choice of optimization depends on arithmetic intensity of code
  Arithmetic intensity is not always fixed
    May scale with problem size
  Caching reduces memory accesses
    Increases arithmetic intensity

Chapter 6 Parallel Processors from Client to Cloud 66

Rooflines

Chapter 6 Parallel Processors from Client to Cloud 67

Benchmarks

Chapter 6 Parallel Processors from Client to Cloud 68

Performance Summary

GPU (480) has 4.4× the memory bandwidth
  Benefits memory-bound kernels
GPU has 13.1× the single-precision throughput and 2.5× the double-precision throughput
  Benefits FP compute-bound kernels
CPU cache prevents some kernels from becoming memory bound when they otherwise would on the GPU
GPUs offer scatter-gather, which assists with kernels with strided data
Lack of synchronization and memory consistency support on the GPU limits performance for some kernels

Chapter 6 Parallel Processors from Client to Cloud 69

6.12 Going Faster: Multiple Processors and Matrix Multiply

Multithreaded DGEMM

Use OpenMP to parallelize the outermost loop of the blocked DGEMM (a hedged do_block sketch follows this slide):

void dgemm (int n, double* A, double* B, double* C)
{
#pragma omp parallel for
  for ( int sj = 0; sj < n; sj += BLOCKSIZE )
    for ( int si = 0; si < n; si += BLOCKSIZE )
      for ( int sk = 0; sk < n; sk += BLOCKSIZE )
        do_block(n, si, sj, sk, A, B, C);
}

Chapter 6 Parallel Processors from Client to Cloud 70
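The OpenMP loop above calls do_block, the cache-blocking helper introduced earlier in the book; here is a hedged sketch of what such a helper looks like (the book's exact version may differ).

/* Hedged sketch of do_block: C += A*B on one BLOCKSIZE x BLOCKSIZE tile.
 * BLOCKSIZE would normally be defined once alongside dgemm. */
#define BLOCKSIZE 32

static void do_block(int n, int si, int sj, int sk,
                     double *A, double *B, double *C)
{
    for (int i = si; i < si + BLOCKSIZE; ++i)
        for (int j = sj; j < sj + BLOCKSIZE; ++j) {
            double cij = C[i + j * n];          /* column-major indexing */
            for (int k = sk; k < sk + BLOCKSIZE; ++k)
                cij += A[i + k * n] * B[k + j * n];
            C[i + j * n] = cij;
        }
}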

Multithreaded DGEMM

Chapter 6 Parallel Processors from Client to Cloud 71

Multithreaded DGEMM

Chapter 6 Parallel Processors from Client to Cloud 72

6.13 Fallacies and Pitfalls

Fallacies

Amdahl's Law doesn't apply to parallel computers
  Since we can achieve linear speedup
  But only on applications with weak scaling
Peak performance tracks observed performance
  Marketers like this approach!
  But compare the Xeon with others in the example
  Need to be aware of bottlenecks
Chapter 6 Parallel Processors from Client to Cloud 73

Pitfalls

Not developing the software to take account of a multiprocessor architecture
  Example: using a single lock for a shared composite resource
    Serializes accesses, even if they could be done in parallel
    Use finer-granularity locking (see the sketch after this slide)

Chapter 6 Parallel Processors from Client to Cloud 74
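A hedged pthreads sketch of the pitfall above: a single global lock serializes every update to a shared table, while one lock per bucket lets updates to different buckets proceed in parallel. All names and sizes are ours.

#include <pthread.h>

#define NBUCKETS 64

struct bucket {
    pthread_mutex_t lock;   /* fine-grained: one lock per bucket */
    long count;
};

static struct bucket table[NBUCKETS];
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;  /* coarse-grained */

void table_init(void) {
    for (int i = 0; i < NBUCKETS; i++) {
        pthread_mutex_init(&table[i].lock, NULL);
        table[i].count = 0;
    }
}

/* Coarse-grained: every update contends for the same lock, serializing accesses. */
void increment_coarse(int key) {
    pthread_mutex_lock(&global_lock);
    table[key % NBUCKETS].count++;
    pthread_mutex_unlock(&global_lock);
}

/* Fine-grained: updates to different buckets proceed in parallel. */
void increment_fine(int key) {
    struct bucket *b = &table[key % NBUCKETS];
    pthread_mutex_lock(&b->lock);
    b->count++;
    pthread_mutex_unlock(&b->lock);
}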

6.14 Concluding Remarks

Concluding Remarks

Goal: higher performance by using multiple processors
Difficulties
  Developing parallel software
  Devising appropriate architectures
SaaS importance is growing, and clusters are a good match
Performance per dollar and performance per Joule drive both mobile and WSC

Chapter 6 Parallel Processors from Client to Cloud 75

Concluding Remarks (cont.)

SIMD and vector operations match multimedia applications and are easy to program

Chapter 6 Parallel Processors from Client to Cloud 76
