COMPUTER ARCHITECTURE
CSE Fall 2014
Faculty of Computer Science and Engineering
Department of Computer Engineering
Vo Tan Phuong
http://www.cse.hcmut.edu.vn/~vtphuong
Chapter 3
Data Representation
Presentation Outline
• Positional Number Systems
• Binary and Hexadecimal Numbers
• Base Conversions
• Binary and Hexadecimal Addition
• Binary and Hexadecimal Subtraction
• Carry and Overflow
• Character Storage
• Floating-Point Numbers
Positional Number Systems
Different Representations of Natural Numbers
XXVII Roman numerals (not positional)
27 Radix-10 or decimal number (positional)
(11011)_2 Radix-2 or binary number (also positional)
Fixed-radix positional representation with k digits
Number N in radix r = (d_{k-1} d_{k-2} . . . d_1 d_0)_r
Value = d_{k-1}×r^(k-1) + d_{k-2}×r^(k-2) + … + d_1×r + d_0
Examples: (11011)_2 = 1×2^4 + 1×2^3 + 0×2^2 + 1×2 + 1 = 27
          (2103)_4 = 2×4^3 + 1×4^2 + 0×4 + 3 = 147
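To make the formula concrete, here is a minimal Python sketch (not from the slides; positional_value is an illustrative name) that evaluates a list of digits in an arbitrary radix:

```python
def positional_value(digits, radix):
    """Evaluate (d_{k-1} ... d_1 d_0)_radix using the positional formula."""
    value = 0
    for d in digits:                 # most significant digit first
        value = value * radix + d    # Horner's rule: same as sum(d_i * radix**i)
    return value

print(positional_value([1, 1, 0, 1, 1], 2))   # 27
print(positional_value([2, 1, 0, 3], 4))      # 147
```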
Binary Numbers
• Each binary digit (called a bit) is either 1 or 0
• Bits have no inherent meaning; they can represent
– Unsigned and signed integers
– Characters
– Floating-point numbers
– Images, sound, etc.
• Bit Numbering
– Least significant bit (LSB) is the rightmost bit (bit 0)
– Most significant bit (MSB) is the leftmost bit (bit 7 in an 8-bit number)
  bit position:   7    6    5    4    3    2    1    0
  bit value:      1    0    0    1    1    1    0    1
  weight:        2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
Converting Binary to Decimal
• Each bit represents a power of 2
• Every binary number is a sum of powers of 2
• Decimal Value = d_{n-1}×2^(n-1) + ... + d_1×2^1 + d_0×2^0
• Binary (10011101)_2 = 2^7 + 2^4 + 2^3 + 2^2 + 1 = 157
  bit position:   7    6    5    4    3    2    1    0
  bit value:      1    0    0    1    1    1    0    1
  weight:        2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
[Table: some common powers of 2]
Convert Unsigned Decimal to Binary
• Repeatedly divide the decimal integer by 2
• Each remainder is a binary digit in the translated value
  37 / 2 = 18  remainder 1   (least significant bit)
  18 / 2 =  9  remainder 0
   9 / 2 =  4  remainder 1
   4 / 2 =  2  remainder 0
   2 / 2 =  1  remainder 0
   1 / 2 =  0  remainder 1   (most significant bit)
Stop when the quotient is zero: 37 = (100101)_2
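A minimal Python sketch of this repeated-division method (to_binary is an illustrative name, not from the slides):

```python
def to_binary(n):
    """Convert an unsigned decimal integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:                        # stop when the quotient is zero
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))     # remainders come out LSB first
    return "".join(reversed(bits))      # reverse so the MSB is leftmost

print(to_binary(37))   # 100101
```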
Hexadecimal Integers
• 16 Hexadecimal Digits: 0 – 9, A – F
• More convenient to use than binary numbers
[Table: binary, decimal, and hexadecimal equivalents for the digits 0 through F]
Converting Binary to Hexadecimal
Each hexadecimal digit corresponds to 4 binary bits
Example:
Convert the 32-bit binary number to hexadecimal
  1110 1011 0001 0110 1010 0111 1001 0100
Solution: map each group of 4 bits to one hexadecimal digit
  1110 1011 0001 0110 1010 0111 1001 0100
   E    B    1    6    A    7    9    4
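Such conversions can be checked quickly with a short Python sketch (illustrative only):

```python
bits = "1110 1011 0001 0110 1010 0111 1001 0100"
value = int(bits.replace(" ", ""), 2)   # parse the 32-bit binary string
print(f"{value:08X}")                   # EB16A794: one hex digit per 4-bit group
```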
Converting Hexadecimal to Decimal
• Multiply each digit by its corresponding power of 16
Value = d_{n-1}×16^(n-1) + d_{n-2}×16^(n-2) + ... + d_1×16 + d_0
• Examples:
  (1234)_16 = 1×16^3 + 2×16^2 + 3×16 + 4 = 4660 (decimal)
  (3BA4)_16 = 3×16^3 + 11×16^2 + 10×16 + 4 = 15,268 (decimal)
Converting Decimal to Hexadecimal
Repeatedly divide the decimal integer by 16
Each remainder is a hex digit in the translated value
  422 / 16 = 26  remainder  6       (least significant digit)
   26 / 16 =  1  remainder 10 = A
    1 / 16 =  0  remainder  1       (most significant digit)
Stop when the quotient is zero: decimal 422 = 1A6 hexadecimal
Integer Storage Sizes
Storage sizes: Byte = 8 bits, Half Word = 16 bits, Word = 32 bits, Double Word = 64 bits

Storage Type    Unsigned Range                          Powers of 2
Byte            0 to 255                                0 to (2^8 – 1)
Half Word       0 to 65,535                             0 to (2^16 – 1)
Word            0 to 4,294,967,295                      0 to (2^32 – 1)
Double Word     0 to 18,446,744,073,709,551,615         0 to (2^64 – 1)

What is the largest 20-bit unsigned integer?
Answer: 2^20 – 1 = 1,048,575
Binary Addition
• Start with the least significant bit (rightmost bit)
• Add each pair of bits
• Include the carry in the addition, if present
  carry:            1 1 1 1
                  0 0 1 1 0 1 1 0   (54)
                + 0 0 0 1 1 1 0 1   (29)
                  0 1 0 1 0 0 1 1   (83)
  bit position:   7 6 5 4 3 2 1 0
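A small Python sketch (illustrative, not part of the slides) that mimics this bit-by-bit addition and reports the final carry:

```python
def add_bits(a, b, width=8):
    """Add two unsigned integers bit by bit, propagating the carry at each position."""
    result, carry = 0, 0
    for i in range(width):                  # start at the least significant bit
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        result |= (total & 1) << i          # sum bit for this position
        carry = total >> 1                  # carry into the next position
    return result, carry                    # final carry out of the MSB

print(add_bits(54, 29))   # (83, 0)
```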
Hexadecimal Addition
• Start with the least significant hexadecimal digits
• Let Sum = summation of two hex digits
• If Sum is greater than or equal to 16
– Sum = Sum – 16 and Carry = 1
• Example:
  carry:              1 1   1
                  1 C 3 7 2 8 6 A
                + 9 3 9 5 E 8 4 B
                  A F C D 1 0 B 5
  Least significant digits: A + B = 10 + 11 = 21
  Since 21 ≥ 16: Sum = 21 – 16 = 5 and Carry = 1
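The same digit-by-digit rule can be sketched in Python (add_hex is an illustrative name, not from the slides):

```python
def add_hex(x, y):
    """Add two equal-length hex strings digit by digit, carrying 1 when a digit sum reaches 16."""
    result, carry = [], 0
    for dx, dy in zip(reversed(x), reversed(y)):   # least significant digits first
        s = int(dx, 16) + int(dy, 16) + carry
        carry, s = (1, s - 16) if s >= 16 else (0, s)
        result.append(f"{s:X}")
    return "".join(reversed(result)), carry

print(add_hex("1C37286A", "9395E84B"))   # ('AFCD10B5', 0)
```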
Signed Integers
• Several ways to represent a signed number
– Sign-Magnitude
– Biased
– 1's complement
– 2's complement
• Divide the range of values into 2 equal parts
– The first part corresponds to the non-negative numbers (≥ 0)
– The second part corresponds to the negative numbers (< 0)
• Focus will be on the 2's complement representation
– Has many advantages over other representations
– Used widely in processors to represent signed integers
Two's Complement Representation
Positive numbers:
  Signed value = Unsigned value
Negative numbers:
  Signed value = Unsigned value – 2^n,  where n = number of bits

Negative weight for the MSB:
  Another way to obtain the signed value is to assign a negative weight
  to the most-significant bit:
       1    0    1    1    0    1    0    0
    -128   64   32   16    8    4    2    1
  = -128 + 32 + 16 + 4 = -76

8-bit Binary    Unsigned value    Signed value
00000000        0                 0
00000001        1                 +1
00000010        2                 +2
...             ...               ...
01111110        126               +126
01111111        127               +127
10000000        128               -128
10000001        129               -127
...             ...               ...
11111110        254               -2
11111111        255               -1
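A minimal Python sketch of the rule Signed = Unsigned – 2^n when the MSB is set (to_signed is an illustrative name):

```python
def to_signed(unsigned_value, n_bits=8):
    """Interpret an n-bit unsigned value as a two's complement signed value."""
    if unsigned_value >= 2 ** (n_bits - 1):     # MSB is 1 -> negative number
        return unsigned_value - 2 ** n_bits     # Signed = Unsigned - 2^n
    return unsigned_value                       # MSB is 0 -> value unchanged

print(to_signed(0b10110100))   # -76
print(to_signed(0b01111111))   # 127
```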
Forming the Two's Complement
  starting value                                  00100100 = +36
  step 1: reverse the bits (1's complement)       11011011
  step 2: add 1 to the value from step 1               + 1
  sum = 2's complement representation             11011100 = -36

Sum of an integer and its 2's complement must be zero:
  00100100 + 11011100 = 00000000 (8-bit sum, ignoring the carry out)

Another way to obtain the 2's complement:
  Start at the least significant 1
  Leave all the 0s to its right unchanged
  Complement all the bits to its left
      Binary value   =  00100 1 00   (least significant 1 shown in the middle)
      2's complement =  11011 1 00
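Both properties can be checked with a short Python sketch (illustrative only):

```python
def twos_complement(value, n_bits=8):
    """Form the two's complement of an n-bit value: invert the bits, then add 1."""
    mask = (1 << n_bits) - 1
    return ((~value) + 1) & mask       # same as (2**n_bits - value) % 2**n_bits

x = 0b00100100                          # +36
neg_x = twos_complement(x)              # 8-bit pattern for -36
print(f"{neg_x:08b}")                   # 11011100
print((x + neg_x) & 0xFF)               # 0: the sum is zero once the carry out is ignored
```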
Sign Bit
• The highest (most significant) bit indicates the sign
  – 1 = negative, 0 = positive
      1 1 1 1 0 1 1 0   sign bit = 1, so the value is negative
      0 0 0 0 1 0 1 0   sign bit = 0, so the value is positive
For Hexadecimal Numbers, check most significant digit
If highest digit is > 7, then value is negative
Examples: 8A and C5 are negative bytes
B1C42A00 is a negative word (32-bit signed integer)
Sign Extension
Step 1: Move the number into the lower-significant bits
Step 2: Fill all the remaining higher bits with the sign bit
• This will ensure that both magnitude and sign are correct
• Examples
– Sign-extend 10110011 to 16 bits
  10110011 = -77   →   11111111 10110011 = -77
– Sign-extend 01100010 to 16 bits
  01100010 = +98   →   00000000 01100010 = +98
• Infinite 0s can be added to the left of a positive number
• Infinite 1s can be added to the left of a negative number
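A minimal Python sketch of sign extension (sign_extend is an illustrative name, not from the slides):

```python
def sign_extend(value, from_bits, to_bits):
    """Sign-extend an integer from from_bits wide to to_bits wide."""
    sign_bit = 1 << (from_bits - 1)
    if value & sign_bit:                                        # negative: fill upper bits with 1s
        value |= ((1 << to_bits) - 1) & ~((1 << from_bits) - 1)
    return value

print(f"{sign_extend(0b10110011, 8, 16):016b}")   # 1111111110110011  (-77)
print(f"{sign_extend(0b01100010, 8, 16):016b}")   # 0000000001100010  (+98)
```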
Two's Complement of a Hexadecimal
• To form the two's complement of a hexadecimal number
– Subtract each hexadecimal digit from 15
– Add 1
• Examples:
2's complement of 6A3D = 95C2 + 1 = 95C3
2's complement of 92F15AC0 = 6D0EA53F + 1 = 6D0EA540
2's complement of FFFFFFFF = 00000000 + 1 = 00000001
• No need to convert hexadecimal to binary
Binary Subtraction
• When subtracting A – B, convert B to its 2's complement
• Add A to (–B)
  direct subtraction:          addition of the 2's complement:
  borrow:  1 1 1               carry: 1 1 1 1
    01001101                       01001101
  – 00111010                     + 11000110   (2's complement of 00111010)
    00010011                       00010011   (same result)
• Final carry is ignored, because
– Negative number is sign-extended with 1's
– You can imagine infinite 1's to the left of a negative number
– Adding the carry to the extended 1's produces extended zeros
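A short Python sketch of subtraction by adding the 2's complement (illustrative only):

```python
def subtract_via_twos_complement(a, b, n_bits=8):
    """Compute a - b by adding a to the two's complement of b, discarding the carry out."""
    mask = (1 << n_bits) - 1
    neg_b = (~b + 1) & mask          # 2's complement of b
    return (a + neg_b) & mask        # the mask drops the final carry

print(f"{subtract_via_twos_complement(0b01001101, 0b00111010):08b}")   # 00010011 (19)
```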
Hexadecimal Subtraction
  direct subtraction:          addition of the 2's complement:
  Borrow:    1    1  1         Carry: 1 1 1 1   1
    B14FC675                       B14FC675
  – 839EA247                     + 7C615DB9   (2's complement of 839EA247)
    2DB1242E                       2DB1242E   (same result)

• When a borrow is required from the digit to the left, add 16 (decimal) to the
  current digit's value, e.g. 5 – 7 becomes 16 + 5 = 21, then 21 – 7 = 14 = E
• Last Carry is ignored
Ranges of Signed Integers
For n-bit signed integers: range is –2^(n-1) to (2^(n-1) – 1)
  Positive range: 0 to 2^(n-1) – 1
  Negative range: –2^(n-1) to –1

Storage Type    Signed Range                                                  Powers of 2
Byte            –128 to +127                                                  –2^7 to (2^7 – 1)
Half Word       –32,768 to +32,767                                            –2^15 to (2^15 – 1)
Word            –2,147,483,648 to +2,147,483,647                              –2^31 to (2^31 – 1)
Double Word     –9,223,372,036,854,775,808 to +9,223,372,036,854,775,807      –2^63 to (2^63 – 1)

Practice: What is the range of signed values that may be stored in 20 bits?
Carry and Overflow
• Carry is important when …
– Adding or subtracting unsigned integers
– Indicates that the unsigned sum is out of range
– Either < 0 or > the maximum unsigned n-bit value
• Overflow is important when …
– Adding or subtracting signed integers
– Indicates that the signed sum is out of range
• Overflow occurs when
– Adding two positive numbers and the sum is negative
– Adding two negative numbers and the sum is positive
– Can happen because of the fixed number of sum bits
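These rules can be sketched in Python (illustrative; add_with_flags is not from the slides), using the four 8-bit cases shown on the following slide:

```python
def add_with_flags(a, b, n_bits=8):
    """Add two n-bit values and report the carry and (signed) overflow flags."""
    mask = (1 << n_bits) - 1
    total = (a & mask) + (b & mask)
    result = total & mask
    carry = total >> n_bits                         # 1 if the unsigned sum does not fit in n bits
    sign_a = (a >> (n_bits - 1)) & 1
    sign_b = (b >> (n_bits - 1)) & 1
    sign_r = (result >> (n_bits - 1)) & 1
    overflow = int(sign_a == sign_b and sign_r != sign_a)   # operands agree, result differs
    return result, carry, overflow

print(add_with_flags(15, 8))      # (23, 0, 0)   no carry, no overflow
print(add_with_flags(15, 248))    # (7, 1, 0)    carry, no overflow
print(add_with_flags(79, 64))     # (143, 0, 1)  no carry, overflow
print(add_with_flags(218, 157))   # (119, 1, 1)  carry and overflow
```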
Carry and Overflow Examples
• We can have carry without overflow and vice-versa
• Four cases are possible (Examples are 8-bit numbers)
Case 1: Carry = 0, Overflow = 0
      0 0 0 0 1 1 1 1   (15)
    + 0 0 0 0 1 0 0 0   ( 8)
      0 0 0 1 0 1 1 1   (23)

Case 2: Carry = 1, Overflow = 0
      0 0 0 0 1 1 1 1   (15)
    + 1 1 1 1 1 0 0 0   (248, i.e. -8)
      0 0 0 0 0 1 1 1   ( 7)

Case 3: Carry = 0, Overflow = 1
      0 1 0 0 1 1 1 1   ( 79)
    + 0 1 0 0 0 0 0 0   ( 64)
      1 0 0 0 1 1 1 1   (143 unsigned, i.e. -113 signed)

Case 4: Carry = 1, Overflow = 1
      1 1 0 1 1 0 1 0   (218, i.e. -38)
    + 1 0 0 1 1 1 0 1   (157, i.e. -99)
      0 1 1 1 0 1 1 1   (119)
Range, Carry, Borrow, and Overflow
• Unsigned Integers: n-bit representation
  – Finite set of unsigned integers: min = 0, max = 2^n – 1
  – Going below min on subtraction: Borrow = 1
  – Going above max on addition: Carry = 1
• Signed Integers: n-bit 2's complement representation
  – Finite set of signed integers: min = –2^(n-1), max = 2^(n-1) – 1
  – Going below min: negative overflow
  – Going above max: positive overflow
Character Storage
• Character sets
– Standard ASCII: 7-bit character codes (0 – 127)
– Extended ASCII: 8-bit character codes (0 – 255)
– Unicode: 16-bit character codes (0 – 65,535)
– Unicode standard represents a universal character set
• Defines codes for characters used in all major languages
• Used in Windows-XP: each character is encoded as 16 bits
– UTF-8: variable-length encoding used in HTML
• Encodes all Unicode characters
• Uses 1 byte for ASCII, but multiple bytes for other characters
• Null-terminated String
– Array of characters followed by a NULL character
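A small Python sketch (illustrative only) showing that ASCII characters take one byte in UTF-8 while other characters take several, and what a null-terminated string looks like in memory:

```python
# ASCII characters occupy one byte in UTF-8; other characters take multiple bytes.
print("A".encode("utf-8"))        # b'A'             (1 byte)
print("é".encode("utf-8"))        # b'\xc3\xa9'      (2 bytes)
print("€".encode("utf-8"))        # b'\xe2\x82\xac'  (3 bytes)

# A null-terminated string: the characters followed by a NULL (0) byte.
c_string = b"Hello" + b"\x00"
print(list(c_string))             # [72, 101, 108, 108, 111, 0]
```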
Printable ASCII Codes
         0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
  2x    sp   !   "   #   $   %   &   '   (   )   *   +   ,   -   .   /
  3x     0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
  4x     @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
  5x     P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
  6x     `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
  7x     p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~  DEL
  (row label = high hex digit, column label = low hex digit; sp = space)
Examples:
ASCII code for space character = 20 (hex) = 32 (decimal)
ASCII code for 'L' = 4C (hex) = 76 (decimal)
ASCII code for 'a' = 61 (hex) = 97 (decimal)
Control Characters
• The first 32 characters of ASCII table are used for control
• Control character codes = 00 to 1F (hexadecimal)
– Not shown in previous slide
• Examples of Control Characters
– Character 0 is the NULL character used to terminate a string
– Character 9 is the Horizontal Tab (HT) character
– Character 0A (hex) = 10 (decimal) is the Line Feed (LF)
– Character 0D (hex) = 13 (decimal) is the Carriage Return (CR)
– The LF and CR characters are used together
  • Together they advance the cursor to the beginning of the next line
• One more control character appears at the end of the ASCII table
– Character 7F (hex) is the Delete (DEL) character
Representing Big (and Small) Numbers
• What if we want to encode the approx. age of the earth?
4,600,000,000 or 4.6 × 10^9
or the weight in kg of one a.m.u. (atomic mass unit)
0.00000000000000000000000000166 or 1.66 × 10^-27
There is no way we can encode either of the above in a 32-bit
integer.
• Floating point representation: (-1)^sign × F × 2^E
  – Still have to fit everything in 32 bits (single precision):
      s (sign)     E (exponent)     F (fraction)
      1 bit        8 bits           23 bits
– The base (2, not 10) is hardwired in the design of the FPALU
– More bits in the fraction (F) or the exponent (E) is a trade-off
between precision (accuracy of the number) and range (size of
the number)
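A hedged Python sketch (illustrative, using the standard struct module) that extracts the sign, exponent, and fraction fields of a 32-bit single-precision value:

```python
import struct

def fp_fields(x):
    """Return the (sign, exponent, fraction) bit fields of a single-precision float."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]   # raw 32-bit pattern
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF                        # 8-bit biased exponent
    fraction = bits & 0x7FFFFF                            # 23-bit fraction
    return sign, exponent, fraction

print(fp_fields(12.0))    # (0, 130, 4194304): 12.0 = +1.5 x 2^(130-127)
```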
Real Numbers
• Conversion from real binary to real decimal
  -1101.1011_2 = -13.6875_10
  since: 1101_2 = 2^3 + 2^2 + 2^0 = 13_10
  and 0.1011_2 = 2^-1 + 2^-3 + 2^-4 = 0.5 + 0.125 + 0.0625 = 0.6875_10
• Conversion from real decimal to real binary:
  +927.45_10 = + 1110011111.01 1100 1100 1100 …_2
  Integer part: divide by 2 repeatedly       Fraction part: multiply by 2 repeatedly
  (remainders give the bits, LSB first)      (integer parts give the bits, MSB first)
  927 / 2 = 463 remainder 1   <- LSB         0.45 × 2 = 0.9   bit 0   <- MSB
  463 / 2 = 231 remainder 1                  0.9  × 2 = 1.8   bit 1
  231 / 2 = 115 remainder 1                  0.8  × 2 = 1.6   bit 1
  115 / 2 =  57 remainder 1                  0.6  × 2 = 1.2   bit 1
   57 / 2 =  28 remainder 1                  0.2  × 2 = 0.4   bit 0
   28 / 2 =  14 remainder 0                  0.4  × 2 = 0.8   bit 0
   14 / 2 =   7 remainder 0                  0.8  × 2 = 1.6   bit 1
    7 / 2 =   3 remainder 1                  0.6  × 2 = 1.2   bit 1
    3 / 2 =   1 remainder 1                  0.2  × 2 = 0.4   bit 0
    1 / 2 =   0 remainder 1                  0.4  × 2 = 0.8   bit 0   …
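A minimal Python sketch of the multiply-by-2 method for the fractional part (fraction_to_binary is an illustrative name, not from the slides):

```python
def fraction_to_binary(frac, max_bits=16):
    """Convert a decimal fraction (0 <= frac < 1) to binary by repeated multiplication by 2."""
    bits = []
    while frac > 0 and len(bits) < max_bits:
        frac *= 2
        bit = int(frac)          # the integer part is the next bit (MSB first)
        bits.append(str(bit))
        frac -= bit              # keep only the fractional part
    return "0." + "".join(bits)

print(fraction_to_binary(0.45))    # 0.0111001100110011  (the pattern 1100 repeats)
print(bin(927))                    # 0b1110011111
```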
Exception Events in Floating Point
• Overflow (floating point) happens when a positive
exponent becomes too large to fit in the exponent field
• Underflow (floating point) happens when a negative
exponent becomes too large to fit in the exponent field
[Figure: real number line from -∞ to +∞ showing the representable range; overflow lies beyond ±(largest F × 2^largest E), underflow lies between 0 and ±(smallest F × 2^most negative E)]
• One way to reduce the chance of underflow or
overflow is to offer another format that has a larger
exponent field
– Double precision – takes two MIPS words:
      word 1:   s (1 bit)   E: exponent (11 bits)   F: fraction (20 bits)
      word 2:   F: fraction, continued (32 bits)
IEEE 754 FP Standard
• Most (all?) computers these days conform to the IEEE 754 floating point
  standard: (-1)^sign × (1+F) × 2^(E-bias)
  – Formats for both single and double precision
  – F is stored in normalized form, so the msb of the significand is always 1 and
    there is no need to store it – it is called the hidden bit
  – To simplify sorting FP numbers, E comes before F in the word, and E is
    represented in excess (biased) notation where the bias is 127 (1023 for double
    precision), so the most negative exponent field is 00000001 = 2^(1-127) = 2^-126
    and the most positive is 11111110 = 2^(254-127) = 2^+127
• Examples (in normalized format)
  – Smallest+:  0 00000001 1.00000000000000000000000 = 1 × 2^(1-127) = 2^-126
  – Zero:       0 00000000 00000000000000000000000   = true 0
  – Largest+:   0 11111110 1.11111111111111111111111 = (2 – 2^-23) × 2^(254-127)
  – 1.0_2 × 2^-1  = 0 01111110 1.00000000000000000000000
  – 0.75_10 × 2^4 = 0 10000010 1.10000000000000000000000
IEEE 754 FP Standard Encoding
• Special encodings are used to represent unusual events
– ± infinity for division by zero
– NaN (not a number) for the results of invalid operations such as 0/0
– True zero is the bit string all zero
  Single Precision                   Double Precision                    Object Represented
  E (8)           F (23)             E (11)            F (52)
  0000 0000       0                  0000 … 0000       0                 true zero (0)
  0000 0000       nonzero            0000 … 0000       nonzero           ± denormalized number
  0000 0001 to    anything           0000 … 0001 to    anything          ± floating point number
  1111 1110                          1111 … 1110
  (exponents +127 to –126)           (exponents +1023 to –1022)
  1111 1111       0                  1111 … 1111       0                 ± infinity
  1111 1111       nonzero            1111 … 1111       nonzero           not a number (NaN)
Support for Accurate Arithmetic
• IEEE 754 FP rounding modes
– Always round up (toward +∞)
– Always round down (toward -∞)
– Truncate
– Round to nearest even (when the Guard || Round || Sticky
are 100) – always creates a 0 in the least significant (kept)
bit of F
• Rounding (except for truncation) requires the
hardware to include extra F bits during calculations
– Guard bit – used to provide one F bit when shifting left to
normalize a result (e.g., when normalizing F after division
or subtraction)
– Round bit – used to improve rounding accuracy
– Sticky bit – used to support Round to nearest even; is set
to a 1 whenever a 1 bit shifts (right) through it (e.g., when
aligning F during addition/subtraction)
F = 1 . xxxxxxxxxxxxxxxxxxxxxxx G R S
Floating Point Addition
• Addition (and subtraction)
  (F1 × 2^E1) + (F2 × 2^E2) = F3 × 2^E3
– Step 0: Restore the hidden bit in F1 and in F2
– Step 1: Align the fractions by right shifting F2 by E1 – E2 positions
  (assuming E1 ≥ E2), keeping track of (three of) the bits shifted
  out in G, R, and S
– Step 2: Add the resulting F2 to F1 to form F3
– Step 3: Normalize F3 (so it is in the form 1.XXXXX …)
  • If F1 and F2 have the same sign, F3 ∈ [1, 4), so at most a 1-bit right shift of
    F3 is needed; increment E3 (check for overflow)
  • If F1 and F2 have different signs, F3 may require many left shifts,
    each time decrementing E3 (check for underflow)
– Step 4: Round F3 and possibly normalize F3 again
– Step 5: Rehide the most significant bit of F3 before storing the
result
Floating Point Addition Example
• Add (0.5 = 1.0000 × 2^-1) + (–0.4375 = –1.1100 × 2^-2)
Step 0: Hidden bits restored in the representation above
Step 1: Shift the significand with the smaller exponent (1.1100) right
        until its exponent matches the larger exponent (so shift once)
Step 2: Add the significands
        1.0000 + (–0.1110) = 1.0000 – 0.1110 = 0.0010
Step 3: Normalize the sum, checking for exponent over/underflow
        0.0010 × 2^-1 = 0.0100 × 2^-2 = … = 1.0000 × 2^-4
Step 4: The sum is already rounded, so we’re done
Step 5: Rehide the hidden bit before storing
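A quick numeric check of this example (a sketch, not the hardware algorithm):

```python
import math

# 0.5 = 1.0000 x 2^-1 and -0.4375 = -1.1100 x 2^-2
a, b = 0.5, -0.4375
s = a + b
print(s)                       # 0.0625

m, e = math.frexp(s)           # frexp returns s = m * 2**e with 0.5 <= m < 1
print(m * 2, e - 1)            # 1.0 -4  ->  1.000 x 2^-4, matching Step 3
```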
Floating Point Multiplication
• Multiplication
  (F1 × 2^E1) × (F2 × 2^E2) = F3 × 2^E3
– Step 0: Restore the hidden bit in F1 and in F2
– Step 1: Add the two (biased) exponents and subtract the bias
from the sum, so E1 + E2 – 127 = E3
also determine the sign of the product (which depends on the
sign of the operands (most significant bits))
– Step 2: Multiply F1 by F2 to form a double precision F3
– Step 3: Normalize F3 (so it is in the form 1.XXXXX …)
• Since F1 and F2 come in normalized, F3 ∈ [1, 4), so at most a 1-bit right shift
  of F3 is needed; increment E3 when that shift occurs
• Check for overflow/underflow
– Step 4: Round F3 and possibly normalize F3 again
– Step 5: Rehide the most significant bit of F3 before storing the
result
Floating Point Multiplication Example
• Multiply (0.5 = 1.0000 × 2^-1) × (–0.4375 = –1.1100 × 2^-2)
Step 0: Hidden bits restored in the representation above
Step 1: Add the exponents: without bias, –1 + (–2) = –3;
        with bias, (–1+127) + (–2+127) – 127 = (–1 – 2) + (127 + 127 – 127)
        = –3 + 127 = 124
Step 2: Multiply the significands
        1.0000 × 1.1100 = 1.110000
Step 3: Normalize the product, checking for exponent over/underflow
        1.110000 × 2^-3 is already normalized
Step 4: The product is already rounded, so we’re done
Step 5: Rehide the hidden bit before storing
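A quick check of this example with Python's struct module (a sketch, not the hardware algorithm):

```python
import struct

# 0.5 x (-0.4375) = -0.21875 = -1.110 x 2^-3; biased exponent = -3 + 127 = 124
product = 0.5 * -0.4375
print(product)                                              # -0.21875

bits = struct.unpack(">I", struct.pack(">f", product))[0]   # raw single-precision pattern
print((bits >> 31) & 1, (bits >> 23) & 0xFF, bin(bits & 0x7FFFFF))
# 1 124 0b11000000000000000000000  ->  sign 1, exponent 124, fraction .1100...
```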