Saint Joseph College of Sindangan Incorporated: Pr-Al 1
ASSESSMENT OF LEARNING 1
PR-AL 1
INTRODUCTION
One of the most important functions of a teacher is to assess the performance of the students. This is a complex task because it involves many considerations, such as the timing of the assessment process, the format of the assessment tools, and the duration of the assessment procedures.
After designing the assessment tools, you package the test, administer it to the students, check the test papers, score them, and record the results. You then return the test papers and give the students feedback on the results of the test.
Assuming that you have already assembled the test (you have written the instructional objectives, prepared the table of specifications, and written test items that match the instructional objectives), the next thing to do is to package the test and reproduce it, as discussed in the previous chapter.
After constructing the test items and putting them in order, the next step is to administer the test to the students. The administration procedures greatly affect the students' performance on the test. Test administration does not simply mean giving the test questions to the students and collecting the test papers after the given time. Below are guidelines for administering the test before, during, and after the examination.
GUIDELINES DURING EXAMINATIONS
1. Avoid giving instructions or talking while the examination is going on, to minimize interruptions and distractions.
2. Avoid giving hints.
3. Monitor to check student progress and discourage cheating.
4. Give time warnings if students are not pacing their work appropriately.
5. Make a note of any questions students ask during the test so that items can be revised for future use.
6. Collect the test papers uniformly to save time and to avoid misplacing them.
After the examination, the teacher needs to score the test papers, record the results, return the test papers, and, last, discuss the items in class so that they can be analyzed and improved for future use.
1. Grade the papers (and add comments if you can); do test analysis (see the module on test analysis)
after scoring and before returning papers to students if at all possible. If it is impossible to do your
test analysis before returning the papers, be sure to do it at another time. It is important to do both
the evaluation of your students and the improvement of your test.
2. If you are recording grades or scores, record them in pencil in your class record before returning the papers. If there are errors or adjustments in grading, the grades are easier to change when recorded in pencil.
3. Return papers in a timely manner.
4. Discuss test items with the students. If students have questions, agree to look over the papers again,
as well as the papers of others who have the same question. It is usually better not to agree to make
changes in grades on the spur of the moment while discussing the tests with the students but to give
yourself time to consider what action you want to take. The test analysis may have already alerted
you to a problem with a particular question that is common to several students, and you may already
have made a decision regarding that question (for example, to disregard the question and reduce the highest possible score accordingly, or to give all students credit for that question).
After administering and scoring the test, the teacher should also analyze the quality of each item in the test. Through this, you can identify which items are good, which need improvement, and which should be removed from the test. But when do we consider a test item good? How do we evaluate the quality of each item in the test? Why is it necessary to evaluate each item? Lewis Aiken (1997), an author on psychological and educational measurement, pointed out that a “postmortem” is just as necessary in classroom assessment as it is in medicine.
In this section, we shall introduce a technique known as item analysis to help teachers determine the quality of a test item. One of the purposes of item analysis is to improve the quality of the assessment tools. Through this process, we can identify which items should be retained, revised, or rejected, and which parts of the lesson have or have not been mastered.
There are two kinds of item analysis: quantitative item analysis and qualitative item analysis (Kubiszyn and Borich, 2007).
ITEM ANALYSIS
Item analysis is the process of examining the students' responses to the individual items in a test. It consists of different procedures for assessing the quality of the test items given to the students. Through item analysis, we can identify which of the given test items are good and which are defective. Good items are retained, while defective items are improved, revised, or rejected. Item analysis data serve the following purposes:
1. Item analysis data provide a basis for efficient class discussion of the test results.
2. Item analysis data provide a basis for remedial work.
3. Item analysis data provide a basis for general improvement of classroom instruction.
4. Item analysis data provide a basis for increased skills in test construction.
5. Item analysis procedures provide a basis for constructing a test bank.
There are three common types of quantitative item analysis which provide teachers with three
different types of information about individual test items. These are difficulty index, discrimination
index, and response options analysis.
1. Difficulty Index
It refers to the proportion of the number of students in the upper and lower groups who answered an item correctly. The larger the proportion, the more students have learned the content measured by the item. To compute the difficulty index of an item, use the formula:

DF = n / N, where

DF = difficulty index
n = number of students selecting the correct answer in the upper group and in the lower group
N = total number of students in the upper and lower groups
LEVEL OF DIFFICULTY
To determine the level of difficulty of an item, first find the difficulty index using the formula above, then interpret it using the ranges given below.

LEVEL OF DIFFICULTY OF AN ITEM

Index Range        Level of Difficulty
0.00 - 0.20        Very difficult
0.21 - 0.40        Difficult
0.41 - 0.60        Moderately difficult
0.61 - 0.80        Easy
0.81 - 1.00        Very easy

The higher the value of the index of difficulty, the easier the item is. Hence, more students got the correct answer and more students mastered the content measured by that item.
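As a simple illustration, the computation and interpretation above can be expressed in a short Python sketch. The function names (difficulty_index, difficulty_level) are only illustrative choices, and the cut-off values mirror the ranges in the table above; this is a sketch, not part of the prescribed procedure.

def difficulty_index(correct_upper, correct_lower, total_upper_and_lower):
    # DF = n / N, where n is the number of correct answers in the upper
    # and lower groups and N is the total number of students in both groups.
    return (correct_upper + correct_lower) / total_upper_and_lower

def difficulty_level(df):
    # Interpret DF using the ranges in the table above.
    if df <= 0.20:
        return "very difficult"
    if df <= 0.40:
        return "difficult"
    if df <= 0.60:
        return "moderately difficult"
    if df <= 0.80:
        return "easy"
    return "very easy"

# Example 1 later in this module: n = 10 + 4 = 14 and N = 40
df = difficulty_index(10, 4, 40)
print(df, difficulty_level(df))   # 0.35 difficult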
2. Discrimination Index
The discrimination index is the power of an item to discriminate between the students who scored high and those who scored low in the overall test. In other words, it is the power of the item to discriminate between the students who know the lesson and those who do not.
It is obtained by taking the number of students in the upper group who got the item correctly minus the number of students in the lower group who got the item correctly, and dividing the difference by the number of students in the upper group or in the lower group (use the higher number if the two groups are not equal in size).
The discrimination index is one basis for measuring the validity of an item. It can be interpreted as an indication of the extent to which overall knowledge of the content area or mastery of the skills is related to the response on an item.
TYPES OF DISCRIMINATION
1. Positive discrimination happens when more students in the upper group got the item
correctly than those students in the lower group.
2. Negative discrimination occurs when more students in the lower group got the item
correctly than the students in the upper group.
3. Zero discrimination happens when the numbers of students in the upper group and in the lower group who answer the item correctly are equal; hence, the test item cannot distinguish between the students who performed well in the overall test and the students whose performance was very poor.
LEVEL OF DISCRIMINATION
Ebel and Frisbie (1986), as cited by Hetzel (1997), recommended the levels of discrimination below for easier interpretation.

Index Range          Item Evaluation
0.40 and above       Very good item
0.30 - 0.39          Reasonably good item, but possibly subject to improvement
0.20 - 0.29          Marginal item, usually needing improvement
0.19 and below       Poor item, to be rejected or improved by revision
To compute the discrimination index of an item, use the formula:

DI = (CUG - CLG) / D, where

DI = discrimination index
CUG = number of students selecting the correct answer in the upper group
CLG = number of students selecting the correct answer in the lower group
D = number of students in the upper group or in the lower group

Note: If the sizes of the upper and lower groups are not equal, use the higher number as D.
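The discrimination index can likewise be computed with a small Python sketch; the function name discrimination_index is only an illustrative choice.

def discrimination_index(correct_upper, correct_lower, group_size):
    # DI = (CUG - CLG) / D, where D is the number of students in the
    # upper or lower group (use the larger group if they are unequal).
    return (correct_upper - correct_lower) / group_size

# Example 1 later in this module: CUG = 10, CLG = 4, D = 20
print(discrimination_index(10, 4, 20))   # 0.3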
The quantitative item analysis of an item can be carried out in the following steps:

1. Arrange the test scores from highest to lowest.
2. Separate the papers of the upper group and the lower group (divide the class into two, or take the top 27% and the bottom 27% of the papers).
3. Count the number of students in the upper group and in the lower group who chose each option, and record the tallies in a table like the one below.

Options        A    B    C    D    E
Upper Group
Lower Group

4. Compute the value of the difficulty index and the discrimination index, and also analyze each response among the distracters.
5. Make an analysis of each item.
It is very important to determine whether a test item will be retained, revised, or rejected. Using the discrimination index, we can identify non-performing test items; just always remember that the index seldom indicates what the problem is. Use the checklist below:

                                                                YES    NO
1. Does the key discriminate positively?
2. Do the incorrect options (distracters) discriminate negatively?

If the answers to questions 1 and 2 are both YES, retain the item.
If the answer to one question is YES and to the other is NO, revise the item.
If the answers to questions 1 and 2 are both NO, eliminate or reject the item.
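The retain-revise-reject rule above can be written as a small Python helper. This is only a sketch, assuming the two checklist questions as stated; the function name is illustrative.

def retain_revise_reject(key_discriminates_positively, distracters_discriminate_negatively):
    # Both YES -> retain; one YES and one NO -> revise; both NO -> reject.
    if key_discriminates_positively and distracters_discriminate_negatively:
        return "retain"
    if key_discriminates_positively or distracters_discriminate_negatively:
        return "revise"
    return "reject"

print(retain_revise_reject(True, False))   # revise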
Aside from identifying the difficulty index and the discrimination index, another way to evaluate the performance of an entire test item is through the analysis of the response options. It is very important to examine the performance of each option in a multiple-choice item. Through this, you can determine whether the distracters, or incorrect options, are effective, that is, attractive to those who do not know the correct answer. An incorrect option is considered attractive when more students in the lower group than in the upper group choose it. Analyzing the incorrect options allows teachers to improve the test items so that they can be used again in the future.
DISTRACTERS ANALYSIS
1. Distracter
A distracter is the term used for an incorrect option in a multiple-choice test, while the correct answer is called the key. It is very important for the test writer to know whether the distracters are effective or good distracters. Using quantitative item analysis, we can determine whether the options are good and whether the distracters are effective.
Item analysis can identify non-performing test items, but it seldom indicates the error or the problem in a given item. Several factors, described below, should be considered when students fail to get the correct answer to a given question.
2. Miskeyed item
A test item is a potential miskey if more students from the upper group choose an incorrect option than choose the key.
3. Guessing item
Students from the upper group have an almost equal spread of choices among the given alternatives. Students from the upper group guess their answers for any of the following reasons.
a. The content of the test is not discussed in the class or in the text.
b. The test item is very difficult.
c. The question is trivial.
4. Ambiguous item
This happens when students from the upper group choose an incorrect option and the keyed answer in about equal numbers.
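The effectiveness rule for distracters (an incorrect option is effective when more lower-group than upper-group students choose it) can be checked with a short Python sketch. The function name and labels are illustrative; the sample tallies are those used in Example 2 later in this module, where option D is the key.

def distracter_report(upper, lower, key):
    # upper and lower map each option to the number of students in that
    # group who chose it; key is the letter of the correct answer.
    report = {}
    for option in upper:
        if option == key:
            continue
        effective = lower[option] > upper[option]
        report[option] = "effective" if effective else "needs revision"
    return report

print(distracter_report(
    {"A": 3, "B": 1, "C": 2, "D": 6, "E": 2},
    {"A": 5, "B": 0, "C": 4, "D": 4, "E": 1},
    "D",
))   # A and C are effective; B and E need revision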
Qualitative item analysis (Zurawski, R. M.) is a process in which the teacher or an expert carefully proofreads the test before it is administered to check for typographical errors, to avoid grammatical clues that may give away the correct answer, and to ensure that the level of the reading material is appropriate. These procedures can also include small-group discussions on the quality of the examination and its items with examinees who have already taken the test. According to Cohen, Swerdlik, and Smith (1992), as cited by Zurawski, students who took the examination are asked to express verbally their experience in answering each item. This procedure can help the teacher determine whether the test takers misunderstood a certain item, and it can also help determine why they misunderstood it.
Example 1: A class is composed of 40 students. Divide the group into two. Option B is the correct answer. Based on the data given in the table, what would you, as a teacher, do with the test item?
Options A B C D E
Upper Group 3 10 4 0 3
Lower Group 4 4 8 0 4
1. Compute the difficulty index.

n = 10 + 4 = 14
N = 40

DF = n / N
DF = 14 / 40
DF = 0.35 or 35%

2. Compute the discrimination index.

CUG = 10
CLG = 4
D = 20

DI = (CUG - CLG) / D
DI = (10 - 4) / 20
DI = 6 / 20
DI = 0.30 or 30%
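The figures in Example 1 can be verified with a short Python snippet; the variable names are only illustrative.

# Example 1: key is B; the upper and lower groups have 20 students each
upper = {"A": 3, "B": 10, "C": 4, "D": 0, "E": 3}
lower = {"A": 4, "B": 4, "C": 8, "D": 0, "E": 4}
key = "B"

df = (upper[key] + lower[key]) / 40   # 14 / 40 = 0.35
di = (upper[key] - lower[key]) / 20   # 6 / 20 = 0.30
print(df, di)   # 0.35 0.3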
Example 2. A class is composed of 50 students. Use 27% to get the upper and the lower groups.
Analyze the item given the following results. Option D is the correct answer. What will you do with the
test item?
OPTIONS A B C D E
UPPER GROUP 3 1 2 6 2
LOWER GROUP 5 0 4 4 1
1. Compute the difficulty index.

n = 6 + 4 = 10
N = 14 + 14 = 28

DF = n / N
DF = 10 / 28
DF = 0.36 or 36%

2. Compute the discrimination index.

CUG = 6
CLG = 4
D = 14

DI = (CUG - CLG) / D
DI = (6 - 4) / 14
DI = 2 / 14
DI = 0.14 or 14%
3. Make an analysis.
a. Only 36% of the examinees got the answer correctly, hence the item is difficult.
b. More students from the upper group got the answer correctly, hence it has a positive
discrimination.
c. Modify options B and E because more students from the upper group than from the lower group chose them; hence, they are not effective distracters, since most of the students who performed well in the overall examination selected them as their answers.
d. Retain options A and C because most of the students who did not perform well in the overall examination selected them as their answers. Hence, options A and C are effective distracters.
4. Conclusion: Revise the item by modifying options B and E.
Example 3. A class is composed of 50 students. Use 27 % to get the upper and the lower groups.
Analyze the item given the following results. Option E is the correct answer. What will you do with
the test item?
OPTIONS              A    B    C    D    E
UPPER GROUP (27%)    2    3    2    2    5
LOWER GROUP (27%)    2    2    1    1    8
1. Compute the difficulty index.

n = 5 + 8 = 13
N = 14 + 14 = 28

DF = n / N
DF = 13 / 28
DF = 0.46 or 46%

2. Compute the discrimination index.

CUG = 5
CLG = 8
D = 14

DI = (CUG - CLG) / D
DI = (5 - 8) / 14
DI = -3 / 14
DI = -0.21 or -21%
3. Make an analysis.
a. 46% of the students got the answer to test item correctly, hence, the test item is moderately
difficult.
b. More students from the lower group got the item correctly; therefore, the item has negative discrimination. The discrimination index is -21%.
c. No need to analyze the distracters because the item discriminates negatively.
d. Modify all the distracters because they are not effective. Most of the students in the upper
group chose the incorrect options. The options are effective if most of the students in the
lower group chose the incorrect options.
4. Conclusion: Reject the item because it has a negative discrimination index.
Example 4: Potential Miskeyed Item. Make an item analysis of the table below. Option A is the keyed answer. What will you do with a test item that is a potential miskey?
Options A B C D E
Upper Group 1 2 3 10 4
Lower Group 3 4 4 4 5
1. Compute the difficulty index.

n = 1 + 3 = 4
N = 40

DF = n / N
DF = 4 / 40
DF = 0.10 or 10%

2. Compute the discrimination index.

CUG = 1
CLG = 3
D = 20

DI = (CUG - CLG) / D
DI = (1 - 3) / 20
DI = -2 / 20
DI = -0.10 or -10%
3. Make an analysis:
a. More students from the upper group chose option D than option A, even though option A is supposedly the correct answer.
b. Most likely, the teacher has written the wrong answer key.
c. The teacher should check the answer key to verify whether the keyed answer is really the intended correct answer.
d. If the teacher miskeyed it, he/she must check and retally the scores of the students' test papers before giving them back.
e. If option A is really the correct answer, revise the item to weaken option D; distracters are not supposed to draw more attention than the keyed answer.
f. Only 10% of the students got the answer to the test item correctly, hence, the test item is
very difficult.
g. More students from the lower group got the item correctly, therefore a negative
discrimination resulted. The discrimination index is – 10%.
h. No need to analyze the distracters because the item is very difficult and discriminates negatively.
4. Conclusion: Check the answer key first. If the item was miskeyed, retally the scores; if option A is really the correct answer, revise the item by weakening option D.
Example 5: Ambiguous Item. Below is the result of the item analysis of a test with an ambiguous test item. Option E is the correct answer. What can you say about the item? Are you going to retain, revise, or reject it?
OPTIONS A B C D E
UPPER GROUP 7 1 1 2 8
LOWER GROUP 6 2 3 3 6
CUG = 8
CLG = 6
D = 20

DI = (CUG - CLG) / D
DI = (8 - 6) / 20
DI = 2 / 20
DI = 0.10 or 10%
3. Make an analysis.
a. Only 36 % of the students got the answer to the test item correctly, hence the
test item is difficult.
b. More students from the upper group got the item correctly, hence, it
discriminates positively. The discrimination index is 10%.
c. About equal numbers of top students went for option A and option E; this implies that they could not tell which is the correct answer. The students do not know the content being tested well enough, so reteaching is needed.
4. Conclusion: Revise the test item because it is ambiguous.
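The screening done by eye in Examples 4 and 5 can also be sketched in Python. The rules follow the descriptions above (a potential miskey when the upper group favours an incorrect option; a possible ambiguity when an incorrect option roughly ties the key in the upper group); the function name and the tie tolerance of one student are assumptions made for illustration.

def screen_item(upper, key):
    # upper maps each option to the number of upper-group students who
    # chose it; key is the keyed answer.
    flags = []
    favourite = max(upper, key=upper.get)
    if favourite != key:
        flags.append(f"potential miskey: upper group favours option {favourite}")
    for option, count in upper.items():
        if option != key and 0 <= upper[key] - count <= 1:
            flags.append(f"possible ambiguity between the key and option {option}")
    return flags

# Example 4 (key A): the upper group favours option D
print(screen_item({"A": 1, "B": 2, "C": 3, "D": 10, "E": 4}, "A"))
# Example 5 (key E): option A nearly ties the key
print(screen_item({"A": 7, "B": 1, "C": 1, "D": 2, "E": 8}, "E"))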
Name: Subject:
Course/Time: Score:
9. How do you determine whether the test item is ambiguous, miskey, or a guessing item?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
___________________________________
10. What is the importance of analyzing the options in each item in a multiple- choice test?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
___________________________________
11. When do we retain, revise, or reject an item?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
___________________________________
12. When do we consider that a distracter is plausible?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
___________________________________
13. When do we say that a distracter is effective and attractive?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
___________________________________
14. Compute the difficulty index and the discrimination index of test item number 6, which was administered to 40 students in a Statistics class. Twenty-seven percent (27%) of the students belong to the upper group and 27% belong to the lower group. Five students from the upper group and 9 students from the lower group got the item correctly.
a. Compute the difficulty index.
b. Compute the discrimination index.
15. The results for a 25-item multiple-choice test in Elementary Algebra with four options are recorded below. Listed are the numbers of students in the lower and upper groups who answered A, B, C, and D. The letter or option with an asterisk is the correct answer.
Item 10 A B C D
i. What can you say about options A and C? Are they effective distracters? How about option B?
What will you do about option B?