
CSC340 Digital Logic Design - Assignment 3
First Name

Last Name

ID#

Email Address

How to submit your Assignment


After filling in all the parts of this file, please follow these steps.
1) Add your name and ID to the first page.
2) Save the file in the original format (Docx or Doc)
(please do not convert to other file formats e.g. PDF, ZIP, RAR, …).
3) Rename the file as
CSC340 – HW3 - ID – YOUR Last Name - YOUR First Name.docx
Example: CSC340 – HW3 - 234566435 - Smith - John.docx
4) Upload the file and submit it (only using Blackboard)
Total Points: 35 points
Question 1: (3 pts)
If the floating-point number representation on a certain system has a sign bit, a 3-bit
exponent and a 4-bit significand:
a) What is the largest positive and the smallest positive number that can be stored on this
system if the storage is normalized? (Assume no bits are implied, there is no biasing,
exponents use two's complement notation, and exponents of all zeros and all ones are
allowed.)
b) What bias should be used in the exponent if we prefer all exponents to be non-negative?
Why would you choose this bias?
Answer:
a)
Largest Positive:
In this model a normalized significand has the form 0.1xxx₂, i.e. the bit
just after the radix point is a 1 (for example, 1101.101₂ normalizes to
0.1101101₂ × 2⁴).

With 1 sign bit, a 3-bit two's-complement exponent, and a 4-bit
significand, the largest positive number uses the largest significand
(0.1111₂) and the largest exponent (011₂ = 3):

0.1111₂ × 2³ = 111.1₂ = 7.5

Smallest Positive:
If the storage is normalized, the smallest positive floating-point number
uses the smallest normalized significand (0.1000₂) and the smallest
exponent (100₂ = −4):

0.1₂ × 2⁻⁴ = 0.00001₂ = 1/32 = 0.03125

b) The bias should be 4.

(As an aside, the smallest negative number, i.e. the most negative, is
the largest positive with sign bit 1. Therefore, 1 011 1111 is the
floating-point representation of the smallest negative, −7.5.)

Explanation:
The bias is selected so that every exponent value is stored as a
non-negative integer. With a 3-bit two's-complement exponent, the
exponents range from −4 (100₂) to +3 (011₂). Adding a bias of 4 maps
this range onto 0 through 7, which is exactly the set of values a 3-bit
unsigned field can hold, so every stored exponent is non-negative.

Why would you choose this bias?

I would choose this bias for several reasons:

First, it makes efficient use of the available exponent bits: the full
exponent range −4 to +3 maps exactly onto the unsigned values 0 through
7, so no bit patterns are wasted.

Second, it makes it easier to compare and perform arithmetic on exponent
values, because the stored fields are non-negative and their unsigned
ordering matches the true ordering of the exponents.

Third, it enables simpler and more efficient floating-point hardware,
since the logic never needs to handle negative exponent fields.

The bottom line: a bias of 4 is chosen for the 3-bit exponent because it
maps the entire range of exponent values onto non-negative stored values,
which is more efficient and simpler to handle and use.
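
To double-check part a, here is a minimal Python sketch (my own
illustration, not part of the assignment) that enumerates every
normalized value in this format and reports the extremes, then shows the
bias-4 mapping from part b:

    def all_normalized_positives():
        # 3-bit two's-complement exponent: -4 .. +3
        # normalized 4-bit significand: 0.1xxx, i.e. 8/16 .. 15/16
        values = []
        for exp in range(-4, 4):
            for frac in range(0b1000, 0b10000):
                values.append((frac / 16) * 2 ** exp)
        return values

    vals = all_normalized_positives()
    print(max(vals))   # 7.5     = 0.1111 x 2^3
    print(min(vals))   # 0.03125 = 0.1000 x 2^-4

    # Part b: a bias of 4 maps every exponent in -4..+3 to a non-negative field.
    for exp in range(-4, 4):
        print(f"exponent {exp:+d} -> stored field {exp + 4:03b}")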

Question 2: (3 pts)
Assume we are using the simple model for floating-point representation as given in this
book (the representation uses a 14-bit format, 5 bits for the exponent with a bias of 15, a
normalized mantissa of 8 bits, and a single sign bit for the number):
a) Show how the computer would represent the numbers 100.0 and 0.25 using this
floating-point format.
b) Show how the computer would add the two floating-point numbers in part a by
changing one of the numbers so they are both expressed using the same power of 2.
c) Show how the computer would represent the sum in part b using the given floating point
representation. What decimal value for the sum is the computer actually storing?
Explain.
Answer:

a) First, I convert 100.0 to binary form.

100₁₀ = 1100100₂

Our binary representation of 100.0 is 1100100.

Now, I normalize the binary number to a power of 2:
1100100.0 = 0.1100100 × 2⁷

Here, the value of the exponent is 7.

Now, convert the exponent to excess-15 by adding the bias:
15 + 7 = 22, and 22₁₀ = 10110₂.

So, the 14-bit floating-point representation of 100.0 is:

Sign bit = 0
Exponent (5 bits) = 10110
Significand (8 bits) = 11001000

0 10110 11001000

a (part 2): I convert 0.25 to binary form.

0.25₁₀ = 0.01₂

Our binary representation of 0.25 is 0.01.

Now, I normalize the binary number to a power of 2:
0.01 = 0.1 × 2⁻¹
Here, the value of the exponent is −1.
Now, convert the exponent to excess-15 by adding the bias:
15 + (−1) = 14, and 14₁₀ = 01110₂.

So, the 14-bit floating-point representation of 0.25 is:

Sign bit = 0
Exponent (5 bits) = 01110
Significand (8 bits) = 10000000

0 01110 10000000

b) 100.0 = 0.11001000 × 2⁷
0.25 = 0.1 × 2⁻¹

Rewriting 0.25 with the same power of 2 as 100.0:
0.25 = 0.000000001 × 2⁷

0.11001000 × 2⁷ + 0.000000001 × 2⁷ = 0.110010001 × 2⁷

c) The sign bit would be 0 (the sum obtained is positive).

The value of the exponent is 7.
I convert the exponent to excess-15 by adding the bias:
15 + 7 = 22

The binary representation of 22 is 10110.

The significand field holds only 8 bits, so 0.110010001 is truncated to
0.11001000: the ninth bit, a 1, is lost.

The 14-bit floating-point representation of the sum is:

0 10110 11001000

Stored value: 0.1100100₂ × 2⁷ = 1100100₂ = 100

The decimal value the computer actually stores is 100, not the true sum
100.25: the 0.25 was lost when the ninth significand bit was truncated
to fit the 8-bit field.
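
The following short Python sketch (my own illustration; the encoder
function and its name are assumptions, not from the text) mimics this
simple 14-bit model and makes the truncation in part c visible:

    def encode_simple(value):
        """Encode a positive value: sign, excess-15 exponent, 8-bit significand."""
        exp = 0
        while value >= 1:          # normalize to the 0.1xxxxxxx form
            value /= 2
            exp += 1
        while value < 0.5:
            value *= 2
            exp -= 1
        frac = int(value * 2 ** 8) # keep 8 significand bits; the rest truncate
        return f"0 {exp + 15:05b} {frac:08b}"

    print(encode_simple(100.0))    # 0 10110 11001000
    print(encode_simple(0.25))     # 0 01110 10000000
    print(encode_simple(100.25))   # 0 10110 11001000 -- same bits: 0.25 is lost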

Question 3: (2 pts)
What causes divide underflow and what can be done about it?
Answer:
"Divide underflow" is caused by a situation where the divisor is much
smaller than the dividend. The computer cannot reconcile two numbers
with such a huge difference in magnitude: the smaller quantity ends up
being represented as zero, so the division produces the equivalent of a
"division by zero" error. Divide underflow can be avoided by carrying
out the operation as repeated subtraction instead of a direct division.
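
A small Python sketch (my own illustration, using ordinary IEEE doubles)
of how an underflowed divisor turns into a divide-by-zero:

    divisor = 1e-300 * 1e-100   # true value 1e-400 is below the smallest
    print(divisor)              # representable double, so it underflows to 0.0

    try:
        result = 1.0 / divisor
    except ZeroDivisionError as err:
        print("divide underflow became:", err)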

Question 4: (4 pts)
Let a = 1.0 29, b = - 1.0 29 and c = 1.0 21. Using the floating-point model described in
the text (the representation uses a 14-bit format, 5 bits for the exponent with a bias of 15, a
normalized mantissa of 8 bits, and a single sign bit for the number), perform the following
calculations, paying close attention to the order of operations. What can you say about the
algebraic properties of floating-point arithmetic in our finite model? Do you think this
algebraic anomaly holds under multiplication as well as addition?
b + (a + c) =
(b + a) + c =
Answer:
Let a = 1.0 × 2⁹, b = −1.0 × 2⁹, and c = 1.0 × 2¹.

I convert each value to the 14-bit binary model:

1 bit … sign
5 bits … exponent (bias = 2^(k−1) − 1 = 2⁴ − 1 = 15)
8 bits … mantissa

a = 1.0 × 2⁹
Normalized for this model: 1.0 × 2⁹ = 0.10000000₂ × 2¹⁰

Sign bit = 0 (we have a positive number)
Mantissa = 10000000
Exponent = (10)₁₀

To convert the exponent to binary, add the bias:
10 + 15 = 25
= (11001)₂

Then our number is:

a = 0 11001 10000000 in binary form.

b = −1.0 × 2⁹

Like part a, but the positive sign is changed to "−".
Here, only our sign bit changes.

Therefore, our number is: 1 11001 10000000

{ Sign bit | Exponent | Mantissa }

c = 1.0 × 2¹ = 0.10000000₂ × 2²
Sign bit = 0
Mantissa = 10000000
Exponent = 2

Convert to binary:
2 + 15 = 17 = (10001)₂
Therefore, our number is: 0 10001 10000000
c = 0 10001 10000000
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

b + (a + c):
To form (a + c), align c to a's exponent:
c = 0.10000000 × 2² = 0.000000001 × 2¹⁰
The 1 now falls in the ninth mantissa bit; with only 8 mantissa bits it
is truncated away, so
(a + c) = 0 11001 10000000 = a
Then b + (a + c) = b + a = −1.0 × 2⁹ + 1.0 × 2⁹ = 0.

(b + a) + c:
b + a = 0, so (b + a) + c = c = 0 10001 10000000 = 1.0 × 2¹ = 2.

Therefore, (b + a) + c = 2 ≠ 0 = b + (a + c).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Floating-point overflow and underflow can crash a program. An overflow
happens when a result is too big to fit in the available digits; here
the culprit is truncation, which silently discards low-order bits, as
happened to c above.

Because of those truncated bits, I cannot assume (a + b) + c = a + (b + c):
addition in this finite floating-point model is neither associative nor
distributive.

Yes, the anomaly holds under multiplication as well: products must also
be rounded or truncated to 8 mantissa bits, so the grouping of
multiplications can likewise change the result.
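
A short Python sketch (my own illustration, using IEEE doubles rather
than the 14-bit model, but showing the same anomaly once the magnitudes
differ enough):

    a = 1.0e16
    b = -1.0e16
    c = 1.0

    print(b + (a + c))   # 0.0 -- c is absorbed when added to the huge a
    print((b + a) + c)   # 1.0 -- a and b cancel first, so c survives

    # Multiplication shows it too, via rounding of intermediate products:
    x = 0.1
    print((x * x) * 10)  # 0.10000000000000002
    print(x * (x * 10))  # 0.1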
Question 5: (4 pts)
Show how each of the following floating point values would be stored using IEEE-754 single
precision (be sure to indicate the sign bit, the exponent, and the significand fields):
a) 12.5 b) −1.5 c) 0.75 d) 26.625
Answer:
a) For 12.5:

In IEEE-754 single precision, a value is 32 bits long: 1 sign bit, an
8-bit exponent with a bias of 127, and a 23-bit significand field.

Convert 12.5 to binary: 12.5 = 1100.1₂ = 1.1001₂ × 2³

Exponent field (add 127 to the exponent of 2): 3 + 127 = 130

Here, the binary of 130 is: 10000010

Therefore, I obtain:

0 10000010 10010000000000000000000

b) For −1.5:

−1.5 in binary: −1.1₂ × 2⁰

Exponent field: 0 + 127 = 127

The binary of 127 is: 01111111

Our answer (note: the sign bit is 1 in this case, as the number is
negative):

1 01111111 10000000000000000000000

c) For 0.75:

0.75 in binary: 1.1₂ × 2⁻¹

Exponent field: −1 + 127 = 126

The binary of 126 is: 01111110

Our answer: 0 01111110 10000000000000000000000

d) For 26.625:

26.625 in binary: 11010.101₂ = 1.1010101₂ × 2⁴

Exponent field: 4 + 127 = 131

The binary of 131 is: 10000011

Our answer: 0 10000011 10101010000000000000000
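
These can be checked mechanically. Here is a Python sketch (my own
illustration) that packs each value as an IEEE-754 single and splits the
32 bits into sign / exponent / significand:

    import struct

    def single_fields(x):
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return f"{bits >> 31} {(bits >> 23) & 0xFF:08b} {bits & 0x7FFFFF:023b}"

    for v in (12.5, -1.5, 0.75, 26.625):
        print(v, "->", single_fields(v))
    # 12.5   -> 0 10000010 10010000000000000000000
    # -1.5   -> 1 01111111 10000000000000000000000
    # 0.75   -> 0 01111110 10000000000000000000000
    # 26.625 -> 0 10000011 10101010000000000000000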

Question 6: (4 pts)
Show how each of the following floating point values would be stored using IEEE-754 double
precision (be sure to indicate the sign bit, the exponent, and the significand fields):
a) 12.5 b) −1.5 c) 0.75 d) 26.625
Answer:
For IEEE-754 double precision:

- 1 bit is used for the sign, 11 bits are used for the exponent, and
52 bits are used for the mantissa field. The bias is 1023.

- When I store a number in double precision, I start by adding the bias
to the exponent, then store the result in the exponent field. Doing this
makes the stored exponent non-negative.

a) 12.5 = 1100.1₂ = 1.1001₂ × 2³

The exponent is 3 and the mantissa is 1001.
Note: the mantissa field has 52 bits. After 1001, fill the remaining
48 bits with 0.

I do not store 3 directly in the exponent field.
The exponent field = bias + exponent = 1023 + 3 = 1026.
I convert 1026 to binary: 1026 = 10000000010.
12.5 is a positive number, so the sign bit = 0.

sign (1 bit) = 0
exponent (11 bits) = 10000000010
mantissa (52 bits) = 1001000----0

b) −1.5
1.5 = 1.1₂ × 2⁰

The exponent is 0 and the mantissa is 1.
Note: fill 1 in the mantissa, then fill the remaining 51 bits with 0.
I do not store 0 directly in the exponent field.
The exponent field = bias + exponent = 1023 + 0 = 1023.
I convert 1023 to binary: 1023 = 01111111111.
−1.5 is a negative number, so the sign bit = 1.

sign = 1
exponent = 01111111111
mantissa = 1000-----0

c) 0.75 = 1.1₂ × 2⁻¹

The exponent is −1 and the mantissa is 1.
Note: fill 1 in the mantissa, then fill the remaining 51 bits with 0.
I do not store −1 directly in the exponent field.
The exponent field = bias + exponent = 1023 − 1 = 1022.
I convert 1022 to binary: 1022 = 01111111110.
0.75 is a positive number, so the sign bit = 0.

sign = 0
exponent = 01111111110
mantissa = 1000-----0

d) 26.625 = 11010.101₂ = 1.1010101₂ × 2⁴

The exponent is 4 and the mantissa is 1010101.
Note: fill 1010101 in the mantissa, then fill the remaining 45 bits
with 0.
I do not store 4 directly in the exponent field.
The exponent field = bias + exponent = 1023 + 4 = 1027.
I convert 1027 to binary: 1027 = 10000000011.
26.625 is a positive number, so the sign bit = 0.

sign = 0
exponent = 10000000011
mantissa = 1010101000-----0
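
The same mechanical check for double precision (a Python sketch, my own
illustration):

    import struct

    def double_fields(x):
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        return f"{bits >> 63} {(bits >> 52) & 0x7FF:011b} {bits & (2**52 - 1):052b}"

    for v in (12.5, -1.5, 0.75, 26.625):
        print(v, "->", double_fields(v))
    # 12.5   -> 0 10000000010 1001000...0  (exponent field 1026)
    # -1.5   -> 1 01111111111 1000000...0  (exponent field 1023)
    # 0.75   -> 0 01111111110 1000000...0  (exponent field 1022)
    # 26.625 -> 0 10000000011 1010101...0  (exponent field 1027)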

Question 7: (4 pts)
a) The ASCII code for the letter A is 1000001, and the ASCII code for the letter a is 1100001.
Given that the ASCII code for the letter G is 1000111, without looking at Table 2.7, what is
the ASCII code for the letter g?
b) The EBCDIC code for the letter A is 1100 0001, and the EBCDIC code for the letter a is
1000 0001. Given that the EBCDIC code for the letter G is 1100 0111, without looking at
Table 2.6, what is the EBCDIC code for the letter g?
c) The ASCII code for the letter A is 1000001, and the ASCII code for the letter a is 1100001.
Given that the ASCII code for the letter Q is 1010001, without looking at Table 2.7, what
is the ASCII code for the letter q?
d) The EBCDIC code for the letter J is 1101 0001, and the EBCDIC code for the letter j is
1001 0001. Given that the EBCDIC code for the letter Q is 1101 1000, without looking at
Table 2.6, what is the EBCDIC code for the letter q?
e) In general, if you were going to write a program to convert uppercase ASCII characters
to lowercase, how would you do it? Looking at Table 2.6, could you use the same
algorithm to convert uppercase EBCDIC letters to lowercase?
f) If you were tasked with interfacing an EBCDIC-based computer with an ASCII or Unicode
computer, what would be the best way to convert the EBCDIC characters to ASCII
characters?
Answer:

a) Find the ASCII code for letter g:

STEP 1: Find the decimal values for letters A and a.

The ASCII code for letter "A" is 1000001.
Convert the ASCII code for letter "A" to a decimal value:
1000001₂ = 1×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 64 + 0 + 0 + 0 + 0 + 0 + 1
= 65

The ASCII code for letter "a" is 1100001.
Convert the ASCII code for letter "a" to a decimal value:
1100001₂ = 1×2⁶ + 1×2⁵ + 0×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 64 + 32 + 0 + 0 + 0 + 0 + 1
= 97

STEP 2: Find the difference between the decimal values of a and A.

Decimal value of a = 97
Decimal value of A = 65
Difference = 97 − 65 = 32

STEP 3: Convert the ASCII code for letter G to a decimal value.
The ASCII code for letter G is 1000111.
1000111₂ = 1×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 1×2¹ + 1×2⁰
= 64 + 0 + 0 + 0 + 4 + 2 + 1
= 71

STEP 4: Add the difference found in step 2 to the decimal value of G to
get the decimal value of g.
Decimal value of G = 71
Difference = 32
Decimal value of g = 71 + 32 = 103

STEP 5: Convert the decimal value of g, 103, to binary by repeated
division by 2:

103 ÷ 2 = 51 remainder 1
51 ÷ 2 = 25 remainder 1
25 ÷ 2 = 12 remainder 1
12 ÷ 2 = 6 remainder 0
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1

Reading the remainders from last to first: 1100111.

Therefore, the ASCII code for letter "g" is 1100111.

b) The Extended Binary Coded Decimal Interchange Code (EBCDIC) is an
8-bit alphanumeric code used in mainframe applications.

STEP 1: The EBCDIC code for letter A is 11000001.

Convert the EBCDIC code for letter A to a decimal value:
11000001₂ = 1×2⁷ + 1×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 128 + 64 + 0 + 0 + 0 + 0 + 0 + 1
= 193

The EBCDIC code for letter a is 10000001.
Convert the EBCDIC code for letter "a" to a decimal value:
10000001₂ = 1×2⁷ + 0×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 128 + 0 + 0 + 0 + 0 + 0 + 0 + 1
= 129

STEP 2: Find the difference between the decimal values of A and a.

Decimal value of a = 129
Decimal value of A = 193
Difference = 193 − 129 = 64.

STEP 3: Convert the EBCDIC code for letter G to a decimal value.
The EBCDIC code for letter G is 11000111.
11000111₂ = 1×2⁷ + 1×2⁶ + 0×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 1×2¹ + 1×2⁰
= 128 + 64 + 0 + 0 + 0 + 4 + 2 + 1
= 199

STEP 4: Subtract the difference found in step 2 from the decimal value
of G to get the decimal value of g.
199 − 64 = 135.

STEP 5: Convert the decimal value of g, 135, to binary by repeated
division by 2:

135 ÷ 2 = 67 remainder 1
67 ÷ 2 = 33 remainder 1
33 ÷ 2 = 16 remainder 1
16 ÷ 2 = 8 remainder 0
8 ÷ 2 = 4 remainder 0
4 ÷ 2 = 2 remainder 0
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1

Reading the remainders from last to first: 10000111.

Therefore, the EBCDIC code for letter "g" is 10000111.

c) Find the ASCII code for letter q:

STEP 1: Find the decimal values for letters A and a.
As computed in part a, the ASCII code 1000001 for letter "A" is decimal
65, and the ASCII code 1100001 for letter "a" is decimal 97.

STEP 2: Find the difference between the decimal values of a and A.

Decimal value of a = 97
Decimal value of A = 65
Difference = 97 − 65 = 32

STEP 3: Convert the ASCII code for letter Q to a decimal value.
The ASCII code for letter Q is 1010001.
1010001₂ = 1×2⁶ + 0×2⁵ + 1×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 64 + 0 + 16 + 0 + 0 + 0 + 1
= 81

STEP 4: Add the difference found in step 2 to the decimal value of Q to
get the decimal value of q.
Decimal value of Q = 81
Difference = 32
Decimal value of q = 81 + 32 = 113

STEP 5: Convert the decimal value of q, 113, to binary by repeated
division by 2:

113 ÷ 2 = 56 remainder 1
56 ÷ 2 = 28 remainder 0
28 ÷ 2 = 14 remainder 0
14 ÷ 2 = 7 remainder 0
7 ÷ 2 = 3 remainder 1
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1

Reading the remainders from last to first: 1110001.

Therefore, the ASCII code for letter "q" is 1110001.

d) The EBCDIC is an 8-bit alphanumeric code used in mainframe
applications.

STEP 1: The EBCDIC code for letter J is 11010001.

Convert the EBCDIC code for letter J to a decimal value:
11010001₂ = 1×2⁷ + 1×2⁶ + 0×2⁵ + 1×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 128 + 64 + 0 + 16 + 0 + 0 + 0 + 1
= 209

The EBCDIC code for letter "j" is 10010001.
Convert the EBCDIC code for letter "j" to a decimal value:
10010001₂ = 1×2⁷ + 0×2⁶ + 0×2⁵ + 1×2⁴ + 0×2³ + 0×2² + 0×2¹ + 1×2⁰
= 128 + 0 + 0 + 16 + 0 + 0 + 0 + 1
= 145

STEP 2: Find the difference between the decimal values of J and j.

Decimal value of j = 145
Decimal value of J = 209
Difference = 209 − 145 = 64.

STEP 3: Convert the EBCDIC code for letter Q to a decimal value.
The EBCDIC code for letter Q is 11011000.
11011000₂ = 1×2⁷ + 1×2⁶ + 0×2⁵ + 1×2⁴ + 1×2³ + 0×2² + 0×2¹ + 0×2⁰
= 128 + 64 + 0 + 16 + 8 + 0 + 0 + 0
= 216

STEP 4: Subtract the difference found in step 2 from the decimal value
of Q to get the decimal value of q.
216 − 64 = 152.

STEP 5: Convert the decimal value of q, 152, to binary by repeated
division by 2:

152 ÷ 2 = 76 remainder 0
76 ÷ 2 = 38 remainder 0
38 ÷ 2 = 19 remainder 0
19 ÷ 2 = 9 remainder 1
9 ÷ 2 = 4 remainder 1
4 ÷ 2 = 2 remainder 0
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1

Reading the remainders from last to first: 10011000.

Therefore, the EBCDIC code for letter "q" is 10011000.

e) The algorithm to convert uppercase ASCII characters to lowercase is:

1. Convert the uppercase ASCII code into a decimal value.
2. Add 32 to the decimal value obtained in step 1.
3. Convert the decimal value obtained in step 2 to binary code to get
the lowercase ASCII code.

Yes, looking at Table 2.6, the same algorithm works for EBCDIC with one
small change: the offset is subtracted instead of added.
The algorithm to convert uppercase EBCDIC characters to lowercase is:
1. Convert the uppercase EBCDIC code into a decimal value.
2. Subtract 64 from the decimal value obtained in step 1.
3. Convert the decimal value obtained in step 2 to binary code to get
the lowercase EBCDIC code.
A sketch of both conversions follows below.
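
A minimal Python sketch of both algorithms (my own illustration; the
function names are assumptions):

    def ascii_to_lower(code):
        """Uppercase ASCII letter code -> lowercase code (add 32)."""
        return code + 32

    def ebcdic_to_lower(code):
        """Uppercase EBCDIC letter code -> lowercase code (subtract 64)."""
        return code - 64

    print(bin(ascii_to_lower(0b1000111)))    # G (71)  -> g: 0b1100111 (103)
    print(bin(ebcdic_to_lower(0b11000111)))  # G (199) -> g: 0b10000111 (135)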

f)
The best way to convert EBCDIC characters to ASCII characters is a
translation (lookup) table: a 256-entry table indexed by the EBCDIC code
whose entries are the corresponding ASCII (or Unicode) codes. Unlike the
uppercase-to-lowercase conversion above, there is no simple arithmetic
relationship between the two character sets, and EBCDIC letters are not
even contiguous, so offset arithmetic cannot work across the whole
alphabet. With a table, each character is converted with a single
lookup, as sketched below.
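
A small Python sketch of the lookup-table approach (my own
illustration). Python ships EBCDIC codecs such as code page 037, which
are exactly this kind of translation table:

    ebcdic_bytes = bytes([0xD1, 0x96, 0x88, 0x95])  # EBCDIC for "John"
    ascii_text = ebcdic_bytes.decode("cp037")       # one table lookup per byte
    print(ascii_text)                               # John
    print(ascii_text.encode("ascii"))               # b'John'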
Question 8: (3 pts)
Decode the following ASCII message, assuming 7-bit ASCII characters and no parity:
1001010 1001111 1001000 1001110 0100000 1000100 1001111 1000101
Answer:

The message:
1001010 1001111 1001000 1001110 0100000 1000100 1001111 1000101

The message is in ASCII. Using the ASCII table, I can decode the message
into text.

The ASCII table using hexadecimal digits is:

      00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F
00   NUL SOH STX ETX EOT ENQ ACK BEL BS  TAB LF  VT  FF  CR  SO  SI
10   DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM  SUB ESC FS  GS  RS  US
20   SP  !   "   #   $   %   &   '   (   )   *   +   ,   -   .   /
30   0   1   2   3   4   5   6   7   8   9   :   ;   <   =   >   ?
40   @   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
50   P   Q   R   S   T   U   V   W   X   Y   Z   [   \   ]   ^   _
60   `   a   b   c   d   e   f   g   h   i   j   k   l   m   n   o
70   p   q   r   s   t   u   v   w   x   y   z   {   |   }   ~   DEL

Each 7-bit string is split into a 3-bit group and a 4-bit group, and
each group converts directly to one hexadecimal digit that can be looked
up in the ASCII table.

The hexadecimal notation of each bit string is:

100 1010 = 4A
100 1111 = 4F
100 1000 = 48
100 1110 = 4E
010 0000 = 20
100 0100 = 44
100 1111 = 4F
100 0101 = 45

Using the ASCII table, these hexadecimal values correspond to the
following characters:

4A = J
4F = O
48 = H
4E = N
20 = (space)
44 = D
4F = O
45 = E

The message is "JOHN DOE"
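
A short Python sketch (my own illustration) that performs the same
decoding directly, one 7-bit group per character:

    message = ("1001010 1001111 1001000 1001110 "
               "0100000 1000100 1001111 1000101")

    decoded = "".join(chr(int(group, 2)) for group in message.split())
    print(decoded)   # JOHN DOE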

Question 9: (4 pts)
Compute the Hamming distance of the following code:
0011010010111100
0000011110001111
0010010110101101
0001011010011110
Answer:

Compute the Hamming distance (the number of differing bit positions) for
all pairs of code words:

d(0011010010111100, 0000011110001111) = 8
d(0011010010111100, 0010010110101101) = 4
d(0011010010111100, 0001011010011110) = 4
d(0000011110001111, 0010010110101101) = 4
d(0000011110001111, 0001011010011110) = 4
d(0010010110101101, 0001011010011110) = 8

The distance of the code is the minimum pairwise distance, which in this
instance is 4. With this code, we can detect up to 3-bit errors
(D(min) − 1 = 3) but correct only 1-bit errors (⌊(D(min) − 1)/2⌋ = 1).
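
A Python sketch (my own illustration) that computes every pairwise
distance and the code distance:

    from itertools import combinations

    codes = [
        "0011010010111100",
        "0000011110001111",
        "0010010110101101",
        "0001011010011110",
    ]

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    distances = [hamming(x, y) for x, y in combinations(codes, 2)]
    print(distances)        # [8, 4, 4, 4, 4, 8]
    print(min(distances))   # 4 -> detect 3-bit errors, correct 1-bit errors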

Question 10: (4 pts)

Suppose we are working with an error-correcting code that will allow all single-bit errors to be
corrected for memory words of length 12. We have already calculated that we need 5 check bits,
and the length of all code words will be 17. Code words are created according to the Hamming
Algorithm presented in the text. We now receive the following code word:
01100101001001001
Assuming even parity, is this a legal code word?
If not, according to our error-correcting code, where is the error?

Answer:

The goal, based on the first question asked, is to check whether the
received code word is legal. I have to verify that it satisfies the
even-parity check equation for each of the 5 check bits.

The check bits occupy positions 1, 2, 4, 8, and 16, and each check bit's
parity group contains every bit position whose binary representation
includes that check bit's position.

I label the bits of the received code word from left to right as b1,
b2, …, b17, and list the bit positions covered by each of the 5 check
bits:

1. C1: 1 3 5 7 9 11 13 15 17
2. C2: 2 3 6 7 10 11 14 15
3. C4: 4 5 6 7 12 13 14 15
4. C8: 8 9 10 11 12 13 14 15
5. C16: 16 17

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The received code word, bit by bit, is:

b1…b17 = 0 1 1 0 0 1 0 1 0 0 1 0 0 1 0 0 1

For each check bit, I apply the parity check equation by XOR-ing the
corresponding bits of the received code word:

C1: 0 XOR 1 XOR 0 XOR 0 XOR 0 XOR 1 XOR 0 XOR 0 XOR 1 = 1
C2: 1 XOR 1 XOR 1 XOR 0 XOR 0 XOR 1 XOR 1 XOR 0 = 1
C4: 0 XOR 0 XOR 1 XOR 0 XOR 0 XOR 0 XOR 1 XOR 0 = 0
C8: 1 XOR 0 XOR 0 XOR 1 XOR 0 XOR 0 XOR 1 XOR 0 = 1
C16: 0 XOR 1 = 1

Since the parity checks for C1, C2, C8, and C16 are not 0, the received
code word is not legal.

Rationale:
Under the Hamming algorithm, a single-bit error sits at the position
given by the sum of the check bits whose parity equations fail. Here
that sum is 1 + 2 + 8 + 16 = 27. But the code word is only 17 bits long,
so no single-bit error can produce this pattern of failures; more than
one bit must have been corrupted. A single-error-correcting code cannot
locate, let alone correct, such a multi-bit error.

The received code word is therefore not legal, and because the failing
checks point to position 27, outside the 17-bit word, the error is not a
correctable single-bit error; the most the code can do is report that
the word is invalid. A short verification sketch follows below.
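
A Python sketch (my own illustration) that verifies the parity
computation above:

    # Bits numbered 1..17 left to right; check bits at 1, 2, 4, 8, 16.
    word = "01100101001001001"
    bit = {i + 1: int(b) for i, b in enumerate(word)}

    syndrome = 0
    for check in (1, 2, 4, 8, 16):
        parity = 0
        for pos in range(1, len(word) + 1):
            if pos & check:          # group: positions containing this check bit
                parity ^= bit[pos]
        print(f"C{check}: parity = {parity}")
        syndrome += check * parity

    print("syndrome =", syndrome)    # 27 -- beyond bit 17: not a single-bit error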

The end
