Module 6
Testing is neither assessment nor appraisal, but at the same time it may become a
means of getting the information, data or evidence needed for assessment and appraisal. Testing
is one of the most significant and widely used techniques in any system of examination or
evaluation. It envisages the use of instruments or tools for gathering information or data. In
written examinations, the question paper is one of the most potent tools employed for collecting
information about pupils’ achievement.
Learning Theory
Learning theories are conceptual frameworks describing how information is absorbed,
processed and retained during learning. Cognitive, emotional, and environmental influences,
as well as prior experience, all play a part in how understanding, or a world view, is acquired
or changed and knowledge and skills retained.
Behaviorists look at learning as an aspect of conditioning and will advocate a system of
rewards and targets in education. Educators who embrace cognitive theory believe that the
definition of learning as a change in behavior is too narrow and prefer to study the learner
rather than their environment and in particular the complexities of human memory. Those
who advocate constructivism believe that a learner's ability to learn relies to a large extent on
what he already knows and understands, and the acquisition of knowledge should be an
individually tailored process of construction. Transformative learning theory focuses upon
the often necessary change that is required in a learner's preconceptions and world view.
Behaviorism
Behaviorism is a philosophy of learning that only focuses on objectively observable
behaviors and discounts mental activities. Behavior theorists define learning as nothing more
than the acquisition of new behavior. Experiments by behaviorists identify conditioning as a
universal learning process. There are two different types of conditioning, each yielding a
different behavioral pattern: Classical conditioning occurs when a natural reflex responds to a
stimulus.
The most popular example is Pavlov's observation that dogs salivate when they eat or even
see food. Essentially, animals and people are biologically "wired" so that a certain stimulus
will produce a specific response. Behavioral or operant conditioning occurs when a response
to a stimulus is reinforced. Basically, operant conditioning is a simple feedback system: If a
reward or reinforcement follows the response to a stimulus, then the response becomes more
probable in the future. For example, leading behaviorist B.F. Skinner used reinforcement
techniques to teach pigeons to dance and bowl a ball in a mini-alley.
How Behaviorism impacts learning:
• Positive and negative reinforcement techniques of Behaviorism can be very effective.
• Teachers use Behaviorism when they reward or punish student behaviors.
Cognitivism
Jean Piaget authored a theory based on the idea that a developing child builds
cognitive structures, mental "maps", for understanding and responding to physical
experiences within their environment. Piaget proposed that a child's cognitive structure
increases in sophistication with development, moving from a few innate reflexes such as
crying and sucking to highly complex mental activities.
The four developmental stages of Piaget's model and the processes by which children
progress through them are:
• Sensorimotor stage (birth to about 2 years): the child builds an understanding of reality through direct sensory and motor interaction with the environment.
• Preoperational stage (about 2 to 7 years): the child is not yet able to conceptualize abstractly and needs concrete physical situations.
• Concrete operational stage (about 7 to 11 years): as physical experience accumulates, the child starts to conceptualize, creating logical structures that explain these physical experiences; abstract problem solving becomes possible, so that, for example, arithmetic equations can be solved with numbers, not just with objects.
• Formal operational stage (from about 11 years): by this point, the child's cognitive structures are like those of an adult and include conceptual reasoning.
Piaget proposed that during all
development stages, the child experiences their environment using whatever mental maps
they have constructed. If the experience is a repeated one, it fits easily - or is assimilated -
into the child's cognitive structure so that they maintain mental "equilibrium". If the
experience is different or new, the child loses equilibrium, and alters their cognitive structure
to accommodate the new conditions. In this way, the child constructs increasingly complex
cognitive structures.
Constructivism
Constructivism is a philosophy of learning founded on the premise that, by reflecting
on our experiences, we construct our own understanding of the world we live in. Each of us
generates our own "rules" and "mental models," which we use to make sense of our
experiences. Learning, therefore, is simply the process of adjusting our mental models to
accommodate new experiences.
INTRODUCTION
From this point of view, evaluation is defined as a systematic process of determining the extent to
which the learners achieve instructional/training objectives. It may include either quantitative or
qualitative description of learner behavior plus value judgement concerning its worth. It is
imperative that we make judgements based on proper information (qualitative or quantitative)
through suitably designed tools and techniques for the purpose.
Role of Evaluation
Purposes of Evaluation
Evaluation helps the learner to
• know his/her strengths and weaknesses and direct his/her study efforts to
make up for gaps in knowledge and understanding
• compare his/her progress with that of his/her peers and get motivated to
do better
It helps the teacher to
• assess how effective the instructional methods and strategies used are
• detect students' learning difficulties and provide for remedy
• identify individual student differences and suitably adapt teaching
strategies
• grade students
It helps administrators to
• make structural changes in the system, such as providing more resources,
revision of the curriculum etc., to improve the system
General Principles
Some general principles that provide direction to the evaluation process are:
• Evaluation is a systematic process to determine the extent to which objectives
are achieved. This means that formulating objectives in clear terms is an
important prerequisite, as that will spell out 'what to evaluate'.
• Evaluation procedures are selected in terms of the purposes to be served. The
question is not 'should this procedure be used?' but rather 'when should this
procedure be used?' A particular procedure is suitable for certain purposes
and not appropriate for others.
• A variety of procedures are needed for evaluation. Tests (of different types), self-
report techniques and observation are some of the procedures available.
Appropriate procedures are to be used depending on the nature of the objectives
(cognitive, psychomotor and affective) to ensure comprehensive
evaluation.
• Knowledge of the limitations as well as the strengths of different evaluation
procedures is needed for their proper use. A teacher/trainer should develop
skills in minimizing errors in evaluation by being able to design and use
different procedures appropriately.
• Evaluation is a means to an end and not an end in itself. Evaluation has to be
looked upon as a process of obtaining reliable information upon which to base
educational decisions (instructional, guidance or administrative). It is not the
end of the teaching learning process.
The Languages of Assessment
Assessment: It is a systematic procedure for collecting information that can be used to make
inferences about the characteristics of people or objects. It is a process of gathering
information to monitor progress and make educational decisions if necessary. As noted in the
definition of a test, an assessment may include a test, but it also includes methods such as
observations, interviews, behavior monitoring, etc.
Evaluation: Basically, evaluation is the process of making judgments based on criteria and
evidence. It also refers to the procedures used to determine whether the subject (i.e. the student)
meets preset criteria, such as qualifying for special education services. Evaluation uses assessment
(remember that an assessment may be a test) to make a determination of qualification in
accordance with a predetermined criterion.
Different forms of Assessment
Assessment frames learning, creates learning activity and orients all aspects of learning
behavior. Tests and other assessment procedures can also be classified in terms of their
functional role in classroom instruction. The functional role explains the sequence in which
assessment procedures are likely to be used in the classroom. This kind of sequencing and
categorization continues today. According to David Miller, the classifications are:
Placement assessment: To determine student performance at the beginning of instruction
Formative assessment: To monitor learning progress during instruction
Diagnostic assessment: To diagnose learning difficulties during instruction
Summative assessment: To assess achievement at the end of instruction
Placement Assessment
Placement assessment is concerned with the student's entry performance and typically focuses
on questions such as (a) Does the student possess the knowledge and skills needed to begin the
planned instruction? For example, before beginning algebra, a student should have sufficient
command of essential mathematics concepts. (b) To what extent has the student already
developed the understanding and skills that are the goals of the planned instruction? Sufficient
levels of comprehension and proficiencies might indicate the desirability of skipping certain
units or of being placed in a more advanced course. (c) To what extent do the student's interests,
work habits, and personality characteristics indicate that one mode of instruction might be
better than another (e.g., group instruction versus independent study)? The goal of placement
assessment is to determine for each student the position in the instructional sequence and the
mode of instruction that is most beneficial.
Formative Assessment
Assessment for learning is a formative assessment. Formative assessment is used to monitor
learning progress during instruction. Its purpose is to provide continuous feedback to both
students and teachers concerning learning successes and failures. The wide variety of
information that teachers collect about students’ learning processes provides the basis for
determining what they need to do next to move student learning forward. It provides the basis
for providing descriptive feedback for students and deciding on groupings, instructional
strategies, and resources. The feedback to students provides reinforcement of successful
learning and identifies the specific learning errors and misconceptions that need correction.
Formative assessment depends heavily on specially prepared tests and assessments for each
segment of instruction, that is, unit-wise or chapter-wise. Tests and other types of assessment tasks
used for formative assessment are most frequently teacher-made, but customized tests made
available by publishers of textbooks and other instructional materials can also serve this
function. Observational techniques are, of course, also useful in monitoring student progress
and identifying learning errors. Because formative assessment is directed toward improving
learning and instruction, the results are typically not used for assigning course grades.
Diagnostic Assessment
Diagnostic assessment is a highly specialized procedure. It is concerned with the
persistent or recurring learning difficulties that are left unresolved by the standard corrective
prescriptions of formative assessment. If a student continues to experience failure in reading,
mathematics, or other subjects despite the use of prescribed alternative methods of instruction,
then a more detailed diagnosis is indicated. To use a medical analogy, formative assessment
provides first-aid treatment for simple learning problems, and diagnostic assessment searches
for the underlying causes of those problems that do not respond to first-aid treatment. Thus,
diagnostic assessment is much more comprehensive and detailed. It involves the use of
specially prepared diagnostic tests as well as various observational techniques. Serious learning
disabilities are also likely to require the services of educational, counselling, and medical
specialists. The aim of diagnostic assessment is to determine the causes of persistent learning
problems and to formulate a plan for remedial action.
Summative Assessment
The last kind of assessment is called summative assessment; it is also known as
assessment of learning. Summative assessment typically comes at the end of a course of
instruction. It is designed to determine the extent to which the instructional goals have been
achieved and is used primarily for assigning course grades or for certifying student mastery of
the intended learning outcomes. The techniques used in summative assessment are determined
by the instructional goals, but they typically include teacher made achievement tests, ratings
on various types of performance, and assessments of products. These various sources of
information about student achievement may be systematically collected into a portfolio that
may be used to summarize or showcase the student’s accomplishments and progress. Although
the main purpose of summative assessment is grading or the certification of student
achievement, it also provides information for judging the appropriateness of the course
objectives and the effectiveness of the instruction.
Thus, sequencing assessment in this way makes the teaching-learning process more sensible. Among all the
forms of assessment, formative assessment is the most powerful for improving the
teaching-learning process, and within formative assessment, questioning students is an art.
Graduate Attributes lead to Programme Outcomes
Every student becomes a graduate at the end of their programme, and each graduate should
possess appropriate graduate attributes. The term Graduate Attribute by itself has been
defined differently by educationalists.
The most popular definition describes graduate attributes as the qualities, skills and understandings an
institutional community agrees its students would desirably develop during their time at the
institution. Another expert explains that the term Graduate Attribute (GA) is abstract
and denotes broad concepts such as employability, lifelong learning and preparation for an
uncertain future.
The National Board of Accreditation, Govt. of India, has listed the graduate attributes for different
programmes. The details are available at http://www.nbaind.org, including the
graduate attributes that lead to programme outcomes for the UG Engineering programme.
Graduate attributes can be assessed indirectly in a programme. The assessment starts at the
micro level with course outcomes (COs). Achievement of all the course outcomes leads to the
programme outcomes (PO) and programme specific outcomes (PSO), and the achievement of
the POs and PSOs is indirectly linked with the graduate attributes.
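As an illustration of how this CO-to-PO linkage is often quantified (a minimal sketch only; NBA does not mandate a single formula, and the mapping strengths and attainment values below are hypothetical), a weighted average over a CO-PO mapping matrix might be computed as follows:

# Illustrative sketch: aggregating course-outcome (CO) attainment into
# programme-outcome (PO) attainment via a CO-PO mapping matrix.
# Mapping strengths (1 = low, 2 = medium, 3 = high) and the CO attainment
# levels (on a 0-3 scale) are hypothetical values used only for demonstration.

co_attainment = {"CO1": 2.4, "CO2": 1.8, "CO3": 3.0}

co_po_map = {
    "CO1": {"PO1": 3, "PO2": 1},
    "CO2": {"PO1": 2, "PO3": 3},
    "CO3": {"PO2": 2, "PO3": 1},
}

def po_attainment(co_attainment, co_po_map):
    """Weighted average of CO attainment for each PO, weighted by mapping strength."""
    totals, weights = {}, {}
    for co, strengths in co_po_map.items():
        for po, strength in strengths.items():
            totals[po] = totals.get(po, 0.0) + strength * co_attainment[co]
            weights[po] = weights.get(po, 0) + strength
    return {po: round(totals[po] / weights[po], 2) for po in totals}

print(po_attainment(co_attainment, co_po_map))
# prints something like {'PO1': 2.16, 'PO2': 2.8, 'PO3': 2.1}

The same weighted-average idea extends to PSOs; institutions typically also combine such direct measures with indirect ones (for example, exit surveys) before interpreting PO attainment.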
Questioning Skill
Questioning skill is an important skill that needs to be developed among faculty
members. Questioning plays a major role in all forms of assessment, particularly placement,
formative and diagnostic assessment. Much classroom practice can be described as assessment activities.
Teachers set tasks and activities and pose questions to learners. Learners respond to the tasks,
activities and questions, and the teachers make judgements on the learners’ knowledge,
understanding and skills acquisition as evidenced in the learners’ responses. These judgements
on learners’ performance happen quite naturally in the course of any teaching and learning
session and require two-way dialogue, decision-making and communication of the assessment
decision in the form of quality feedback to the learner on their performance. Depending on how
successfully these classroom practices have been undertaken, learning will have taken place in
varying degrees from learner to learner.
Testing the learning is an important part of classroom practice, and questioning is one of the
most common methods of checking learner understanding. Questioning is something teachers
do naturally as part of their daily routine, but developing the skills associated with questioning
techniques presents many challenges for teachers and it is something that is developed over
time. Teachers need to review what is to be learnt in any one teaching and learning session and
plan for the inclusion of questioning accordingly. Teachers must know when to pose open and
closed questions, how to develop a question-distribution strategy and when to use questions
to check learners’ knowledge.
So, follow the Pose, Pause, Pounce (PPP) approach:
• Pose the question to the whole group.
• Pause – allowing all learners to think of the answer.
• Name a learner to answer.
• Listen to the answer.
• Reward correct answers.
• Incorrect answers should not be ridiculed either by the teacher or the remainder of the
group of learners.
• Spread the questions around the class so that all can participate.
The distribution of the question is again very important.
• If teachers work around the class in an obvious systematic order, those who have
answered tend to relax a little, and sometimes ‘switch off’.
• Use a technique which is not obvious.
• Be conscious of the tendency to choose the same learners when asking questions.
• Most teachers tend to concentrate their attention on the same few learners, so deliberately pay
attention to those normally omitted.
Different Questioning types in the classroom
Closed questions
There are many advantages to closed questions. They’re quick and easy to
respond to and generally reduce confusion. They’re also particularly useful for
challenging pupils' memory and recalling facts.
Open questions
Open questions, by contrast, cannot be answered with a simple 'yes' or 'no'; they invite
learners to elaborate and to explain their thinking.
Useful for: critical or creative discussion, finding out more information about a
concept or lesson
Probing questions
These questions are useful for gaining clarification and encouraging others to
tell you more information about a subject. Probing questions are usually a
series of questions that dig deeper and provide a fuller picture.
When a teacher wishes to start a new lesson, the teacher can begin with a series of
questions; at the point where the students are no longer able to give the answers, the
teacher introduces the new lesson. Bringing the students to this point with questions
before introducing the concepts of the lesson motivates the students to
listen to the class.
Leading questions
Leading questions are phrased in a way that steers the respondent towards a particular
answer. It’s important to use leading questions carefully; they can be seen as an unfair
way of getting the answer you want.
Loaded questions
Loaded questions are seemingly straightforward, closed questions — with a
twist: they contain an assumption about the respondent. They are popularly used
by examiners during the viva-voce of a laboratory or project to prompt the
student into revealing something about the fundamentals of the lab or project
that they would otherwise be unwilling to disclose.
For example, the question: ‘have you stopped copying the answers from the
nearby students?’ assumes the respondent has copied at some point. Whether
the student answers yes or no, the student will admit to having copied the
answers at some point.
Of course, the preferred response would be: ‘I have never copied answers in my
examination.’ But it’s not always easy to spot the trap. These questions are quite
rightly seen as manipulative.
Useful for: discovering facts about someone who would otherwise be reluctant
to offer up the information
Funnel questions
When a faculty member wishes to start with a generalized discussion and then get into the
concepts in detail, funnel questions are most suitable. Funnel questions are
very useful when refreshing concepts before exams or practicals.
Funnel questions can also be used to bring students into a relaxed, attentive
mode: asking students to go into detail about their difficulties in learning or
listening distracts them from their anxiety and gives the faculty member the
information needed to provide them with a solution, which in turn calms them
down and makes them think something positive is being done to help them.
Recall questions
To begin a session with a quick assessment, recall questions are very useful
for faculty members. Recall questions require the student to remember the
lesson taught in an earlier class. For example, when a faculty member wishes
to start a class by connecting it to the previous class and making the students
remember the earlier concepts, recall questions are a suitable way to open the
session. This kind of question can also be used with a student who is not
listening to the class, to bring back their attention with simple recall questions.
Rhetorical questions
Rhetorical questions help students to always remember a
concept, formula or statement. Rhetorical questions are asked to keep the
students engaged in the class by remembering the lesson. They also help students
to think, be creative and come up with ideas.
Criticisms of Testing
Criticism 1: Tests Create Anxiety. There is no doubt that anxiety increases during testing.
For most students, it motivates them to perform better. For a few, test anxiety may be so great
that it interferes with test performance. These typically are students who are generally
anxious, and the test simply adds to their already high level of anxiety. A number of steps can
be taken to reduce test anxiety, such as thoroughly preparing for the test, taking practice
exercises, and using liberal time limits. Fortunately, many test publishers in recent years have
provided practice tests and shifted from speed tests to power tests. This should help, but it is
still necessary to observe students carefully during testing and to discount the scores of
overly anxious students.
Criticism 2: Tests Categorize and Label Students. Categorizing and labeling individuals can
be a serious problem, particularly when those labels are used as an excuse for poor student
achievement rather than a means of providing the extra services and help to ensure better
achievement. It is all too easy to place individuals in pigeonholes and apply labels that
determine, at least in part, how they are viewed and treated. Classifying students in terms of
levels of mental ability has probably caused the greatest concern in education. When students
are classified as mentally retarded, for example, it influences how teachers and peers view
them, how they view themselves, and the kind of instructional programs they receive. When
students are mislabeled as mentally retarded, as has been the case with some racial and ethnic
minorities, the problem is compounded. At least some of the support for mainstreaming
handicapped students has come from the desire to avoid the categorizing and labeling that
accompanies special education classes. Classifying students into various types of learning
groups can more efficiently use the teacher's time and the school's resources. However, when
grouping, teachers must consider that tests measure only a limited sample of a student's
abilities and that students are continuously changing and developing. By keeping the
groupings tentative and flexible and regrouping for different subjects (e.g., science and math),
teachers can avoid most of the undesirable features of grouping. It is when the categories are
viewed as rigid and permanent that labeling becomes a serious problem. In such cases, it is
not the test that should be blamed but the user of the test.
Criticism 3: Tests Damage Students' Self-Concepts. This is a concern that requires the
attention of teachers, counselors, and other users of tests. The improper use of tests may
indeed contribute to distorted self-concepts. The stereotyping of students is one misuse of
tests that is likely to have an undesirable influence on a student's self-concept. Another is the
inadequate interpretation of test scores that may cause students to make unwarranted
generalizations from the results. It is certainly discouraging to receive low scores on tests,
and it is easy to see how students might develop a general sense of failure unless the results
are properly interpreted. Low-scoring students need to be made aware that aptitude and
achievement tests are limited measures and that the results can change. In addition, the
possibility of over generalizing from low test scores will be lessened if the student's positive
accomplishments and characteristics are mentioned during the interpretation. When properly
interpreted and used, tests can help students develop a realistic understanding of their
strengths and weaknesses and thereby contribute to improved learning and a positive self-
image.
Criticism 4: Tests create Self-fulfilling prophecies. This criticism has been directed primarily
toward intelligence or scholastic aptitude tests. The argument is that test scores create teacher
expectations concerning the achievement of individual students; the teacher then teaches in
accordance with those expectations, and the students respond by achieving to their expected
level – a self-fulfilling prophecy. Thus, those who are expected to achieve more do achieve
more, and those who are expected to achieve less do achieve less. The belief that teacher
expectations enhance or hinder a student's achievement is widely held, and the role of testing
in creating these expectations is certainly worthy of further research.
In summary, there is some merit in the various criticisms concerning the possible
undesirable effects of tests on students; but more often than not, these criticisms should be
directed at the users of the tests rather than the tests themselves. The same persons who
misuse test results are likely to misuse alternative types of information that are even less
accurate and objective. Thus, the solution is not to stop using tests but to start using tests and
other sources of information more effectively.
Two-Dimensional Approach
Teachers traditionally have struggled with issues and concerns pertaining to education,
teaching, and learning. Here are four of the most important organizing questions:
• What is important for students to learn in the limited school and classroom time
available? (the learning question)
• How does one plan and deliver instruction that will result in high levels of learning for
large numbers of students? (the instruction question)
• How does one select or design assessment instruments and procedures that provide
accurate information about how well students are learning? (the assessment question)
• How does one ensure that objectives, instruction, and assessment are consistent with
one another? (the alignment question)
Once an objective has been placed into a particular cell of the Taxonomy Table shown in fig 2,
we can begin systematically to attack the problem of helping students achieve that objective;
in this way the learning question can be answered. Different types of objectives (that is, objectives in
different cells of the table) require different approaches to delivering instruction and
assessment. Similar types of objectives (that is, objectives in the same cells of the table) likely
involve similar approaches to assessment. For example, to assess students' learning with
respect to the number systems, we could provide each student with a list of, say, six numbers,
all of which are either rational or irrational numbers, and ask the student to answer questions
about the list of numbers. The numbers selected should be as different as possible from the
numbers in the textbook or discussed during class. Three example questions follow:
• To what number system, rational or irrational, do all of these numbers belong?
• How do you know that it is the type of number system you say it is?
• How could you change each number so it is an example of the other number system?
That is, if it is an irrational number, change it to a rational number, and if it is a rational
number, change it to an irrational number.
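For the second and third questions, a worked answer might run as follows (the particular numbers are illustrative and are not taken from the module):

$$\sqrt{2}\ \text{is irrational because it cannot be written as}\ \tfrac{p}{q}\ \text{with integers}\ p,q\ (q\neq 0);$$
$$\text{it can be changed into a rational number, e.g.}\ \sqrt{2}\times\sqrt{2}=2,\ \text{or by truncating it to}\ 1.41=\tfrac{141}{100}.$$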
So, it is necessary to understand the two-dimensional framework for preparing instructional
objectives and assessment procedures at the appropriate level of complexity. The
framework can be represented in a two-dimensional table that we call the Taxonomy Table.
The rows and columns of the table contain carefully delineated and defined categories of
knowledge and cognitive processes, respectively. The cells of the table are where the
knowledge and cognitive process dimensions intersect. Objectives, either explicitly or
implicitly, include both knowledge and cognitive processes that can be classified in the
Taxonomy framework. Therefore, objectives can be placed in the cells of the table. It should
be possible to place any educational objective that has a cognitive emphasis in one or more
cells of the table.
Categories of the knowledge dimension
After considering the various designations of knowledge types, especially
developments in cognitive psychology that have taken place since the original framework's
(Bloom’s Taxonomy) creation, the revised framework settled on four general types of knowledge: Factual,
Conceptual, Procedural, and Metacognitive. Table-1 summarizes these four major types of
knowledge and their associated subtypes.
Factual knowledge is knowledge of discrete, isolated content elements "bits of information".
It includes knowledge of terminology and knowledge of specific details and elements. In
contrast, Conceptual knowledge is knowledge of "more complex, organized knowledge forms".
It includes knowledge of classifications and categories, principles and generalizations, and
theories, models, and structures. Procedural knowledge is "knowledge of how to do something''
(p. 52). It includes knowledge of skills and algorithms, techniques and methods, as well as
knowledge of the criteria used to determine and/or justify "when to do what" within specific
domains and disciplines. Finally, Metacognitive knowledge is "knowledge about cognition in
general as well as awareness of and knowledge about one's own cognition". It encompasses
strategic knowledge; knowledge about cognitive tasks, including contextual and conditional
knowledge; and self-knowledge. Of course, certain aspects of metacognitive knowledge are
not the same as knowledge that is defined consensually by experts.
Factual Knowledge
Factual knowledge encompasses the basic elements that experts use in communicating
about their academic discipline, understanding it, and organizing it systematically. These
elements are usually serviceable to people who work in the discipline in the very form in which
they are presented; they need little or no alteration from one use or application to another.
Factual knowledge contains the basic elements students must know if they are to be acquainted
with the discipline or to solve any of the problems in it. The elements are usually symbols
associated with some concrete referents, or "strings of symbols" that convey important
information. For the most part, Factual knowledge exists at a relatively low level of abstraction.
Because there is a tremendous wealth of these basic elements, it is almost inconceivable that a
student could learn all of them relevant to a particular subject matter. As our knowledge
increases in the Engineering and Technology, sciences, and mathematics, even experts in these
fields have difficulty keeping up with all the new elements. Consequently, some selection for
educational purposes is almost always required. For classification purposes, Factual knowledge
may be distinguished from Conceptual knowledge by virtue of its very specificity; that is,
Factual knowledge can be isolated as elements or bits of information that are believed to have
some value in and of themselves. The two subtypes of Factual knowledge are knowledge of
terminology (Aa) and knowledge of specific details and elements (Ab).
Knowledge of terminology
Knowledge of terminology includes knowledge of specific verbal and nonverbal labels
and symbols (e.g., words, numerals, signs, pictures). Each subject matter contains a large
number of labels and symbols, both verbal and nonverbal, that have particular referents. They
are the basic language of the discipline, the shorthand used by experts to express what they
know. In any attempt by experts to communicate with others about phenomena within their
discipline, they find it necessary to use the special labels and symbols they have devised. In
many cases it is impossible for experts to discuss problems in their discipline without making
use of essential terms. Quite literally, they are unable to even think about many of the
phenomena in the discipline unless they use these labels and symbols. The novice learner must
be cognizant of these labels and symbols and learn the generally accepted referents that are
attached to them. As the expert must communicate with these terms, so must those learning the
discipline have a knowledge of the terms and their referents as they attempt to comprehend or
think about the phenomena of the discipline. Here, to a greater extent than in any other category
of knowledge, experts find their own labels and symbols so useful and precise that they are
likely to want the learner to know more than the learner really needs to know or can learn. This
may be especially true in the sciences, where attempts are made to use labels and symbols with
great precision. Scientists find it difficult to express ideas or discuss particular phenomena with
the use of other symbols or with "popular" or "folk knowledge" terms more familiar to a lay
population.
Strategic knowledge
Strategic knowledge is knowledge of the general strategies for learning, thinking, and
problem solving. The strategies in this subtype can be used across many different tasks and
subject matters, rather than being most useful for one particular type of task in one specific
subject area (e.g., solving a quadratic equation or applying Ohm's law).
This subtype of Strategic knowledge includes knowledge of the variety of strategies that
students might use to memorize material, extract meaning from text, or comprehend what they
hear in classrooms or read in books and other course materials. The large number of different
learning strategies can be grouped into three general categories: rehearsal, elaboration, and
organizational (Weinstein and Mayer, 1986). Rehearsal strategies involve repeating words or
terms to be recalled over and over to oneself; they are generally not the most effective strategies
for deeper levels of learning and comprehension. In contrast, elaboration strategies include the
use of various mnemonics for memory tasks as well as techniques such as summarizing,
paraphrasing, and selecting the main idea from texts. Elaboration strategies foster deeper
processing of the material to be learned and result in better comprehension and learning than
do rehearsal strategies. Organizational strategies include various forms of outlining, drawing
"cognitive maps" or concept mapping, and note taking; students transform the material from
one form to another. Organizational strategies usually result in better comprehension and
learning than do rehearsal strategies.
In addition to these general learning strategies, students can have knowledge of various
meta-cognitive strategies that are useful in planning, monitoring, and regulating their cognition.
Students can eventually use these strategies to plan their cognition (e.g., set sub-goals), monitor
their cognition (e.g., ask themselves questions as they read a piece of text, check their answer
to a math problem), and regulate their cognition (e.g., re-read something they don't understand,
go back and "repair" their calculating mistake in a math problem). Again, in this category we
refer to students' knowledge of these various strategies, not their actual use. Finally, this
subtype, Strategic knowledge includes general strategies for problem solving and thinking
(Baron, 1994; Nickerson, Perkins, and Smith, 1985; Sternberg, 1985). These strategies
represent the various general heuristics students can use to solve problems, particularly ill-
defined problems that have no definitive solution method. Examples of heuristics are means-
ends analysis and working backward from the desired goal state. In addition to problem-solving
strategies, there are general strategies for deductive and inductive thinking, including
evaluating the validity of different logical statements, avoiding circularity in arguments,
making appropriate inferences from different sources of data, and drawing on appropriate
samples to make inferences (i.e., avoiding the availability heuristic of making decisions from
convenient instead of representative samples).
Examples of knowledge about cognitive tasks, including contextual and conditional knowledge
• Knowledge that recall tasks (i.e., short-answer items) generally make more demands on
the individual's memory system than recognition tasks (i.e., multiple-choice items)
• Knowledge that a primary source book may be more difficult to understand
than a general textbook or popular book
• Knowledge that a simple memorization task (e.g., remembering a phone number) may
require only rehearsal
• Knowledge that elaboration strategies like summarizing and paraphrasing can result in
deeper levels of comprehension
• Knowledge that general problem-solving heuristics may be most useful when the
individual lacks relevant subject- or task-specific knowledge or in the absence of
specific Procedural knowledge
• Knowledge of the local and general social, conventional, and cultural norms for how,
when, and why to use different strategies
Self-knowledge
Along with knowledge of different strategies and cognitive tasks, Flavell (1979)
proposed that self-knowledge was an important component of meta-cognition. In his model
self-knowledge includes knowledge of one's strengths and weaknesses in relation to cognition
and learning. For example, students who know they generally do better on multiple-choice tests
than on essay tests have some self-knowledge about their test-taking skills. This knowledge
may be useful to students as they study for the two different types of tests. In addition, one
hallmark of experts is that they know when they do not know something and they then have some
general strategies for finding the needed and appropriate information. Self-awareness of the
breadth and depth of one's own knowledge base is an important aspect of self-knowledge.
Finally, students need to be aware of the different types of general strategies they are likely to
rely on in different situations. An awareness that one tends to over-rely on a particular strategy,
when there may be other more adaptive strategies for the task, could lead to a change in strategy
use.
In addition to knowledge of one's general cognition, individuals have beliefs about their
motivation. Motivation is a complicated and confusing area, with many models and theories
available. Although motivational beliefs are usually not considered in cognitive models, a fairly
substantial body of literature is emerging that shows important links between students'
motivational beliefs and their cognition and learning.
A consensus has emerged, however, around general social cognitive models of motivation that
propose three sets of motivational beliefs (Pintrich and Schunk, 1996). Because these beliefs
are social cognitive in nature, they fit into a taxonomy of knowledge. The first set consists of
self-efficacy beliefs, that is, students' judgments of their capability to accomplish a specific
task. The second set includes beliefs about the goals or reasons students have for pursuing a
specific task (e.g., learning vs. getting a good grade). The third set contains value and interest
beliefs, which represent students' perceptions of their personal interest (liking) for a task as
well as their judgments of how important and useful the task is to them. Just as students need
to develop self-knowledge and awareness about their own knowledge and cognition, they also
need to develop self-knowledge and awareness about their own motivation. Again, awareness
of these different motivational beliefs may enable learners to monitor and regulate their
behaviour in learning situations in a more adaptive manner.
Self-knowledge is an important aspect of Meta-cognitive knowledge, but the accuracy
of self-knowledge seems to be most crucial for learning. We are not advocating that teachers
try to boost students' "self-esteem" (a completely different construct from self-knowledge) by
providing students with positive but false, inaccurate, and misleading feedback about their
academic strengths and weaknesses. It is much more important for students to have accurate
perceptions and judgments of their knowledge base and expertise than to have inflated and
inaccurate self-knowledge. If students are not aware they do not know some aspect of Factual
knowledge or Conceptual knowledge or that they don't know how to do something (Procedural
knowledge), it is unlikely they will make any effort to learn the new material. A hallmark of
experts is that they know what they know and what they do not know, and they do not have
inflated or false impressions of their actual knowledge and abilities. Accordingly, we
emphasize the need for teachers to help students make accurate assessments of their self-
knowledge and not attempt to inflate students' academic self-esteem.
Examples of self-knowledge
• Knowledge that one is knowledgeable in some areas but not in others
• Knowledge that one tends to rely on one type of "cognitive tool" (strategy)
in certain situations
• Knowledge of one's capabilities to perform a particular task that are accurate, not
inflated (e.g., overconfident)
• Knowledge of one's goals for performing a task
• Knowledge of one's judgments about the relative utility value of a task
CATEGORIES OF THE COGNITIVE PROCESS DIMENSION
Let us define the cognitive processes within each of the six categories in detail, making
comparisons with other cognitive processes where appropriate. In addition, sample
educational objectives and assessments in various subject areas, as well as alternative versions
of assessment tasks, are presented. Each illustrative objective in the following material should be read as
though preceded by the phrase "The student is able to ... " or "The student learns to...."
Remember
When the objective of instruction is to promote retention of the presented material in
much the same form as it was taught, the relevant process category is Remember.
Remembering involves retrieving relevant knowledge from long term memory. The two
associated cognitive processes are recognizing and recalling. The relevant knowledge may be
Factual, Conceptual, Procedural, or Meta-cognitive, or some combination of these. To assess
student learning in the simplest process category, the student is given a recognition or recall
task under conditions very similar to those in which he or she learned the material. Little, if
any, extension beyond those conditions is expected. If, for example, a student learned the
English equivalents of 20 Spanish words, then a test of remembering could involve requesting
the student to match the Spanish words in one list with their English equivalents in a second
list (i.e., recognize) or to write the corresponding English word next to each of the Spanish
words presented in the list (i.e., recall).
Remembering knowledge is essential for meaningful learning and problem solving as
that knowledge is used in more complex tasks. For example, knowledge of the correct spelling
of common English words appropriate to a given grade level is necessary if the student is to
master writing an essay. Where teachers concentrate solely on rote learning, teaching and
assessing focus solely on remembering elements or fragments of knowledge, often in isolation
from their context. When teachers focus on meaningful learning, however, remembering
knowledge is integrated within the larger task of constructing new
knowledge or solving new problems.
Recognizing
Recognizing involves retrieving relevant knowledge from long-term memory in order
to compare it with presented information. In recognizing, the student searches long-term
memory for a piece of information that is identical or extremely similar to the presented
information (as represented in working memory). When presented with new information, the
student determines whether that information corresponds to previously learned knowledge,
searching for a match. An alternative term for recognizing is identifying.
Sample objectives and corresponding assessments: In general studies, an objective could be for
students to recognize the correct dates of important events in Indian history. A corresponding
test item is: "True or false: India attained independence on August 15, 1947." In
literature, an objective could be to recognize authors of Indian literary works. A corresponding
assessment is a matching test that contains a list of ten authors and a list of slightly more than
ten novels. In mathematics, an objective could be to recognize the numbers of sides in basic
geometric shapes. A corresponding assessment is a multiple choice test with items such as the
following: "How many sides does a pentagon have? (a) four, (b) five, (c) six, (d) seven."
Recalling
Recalling involves retrieving relevant knowledge from long-term memory when given
a prompt to do so. The prompt is often a question. In recalling, a student searches long-term
memory for a piece of information and brings that piece of information to working memory
where it can be processed. An alternative term for recalling is retrieving.
Assessment formats: Assessment tasks for recalling can vary in the number and quality of cues
that students are provided. With low cueing, the student is not given any hints or related
information (such as "What is a meter?"). With high cueing, the student is given several hints
(such as "In the metric system, a meter is a measure of ______.").
Assessment tasks for recalling can also vary in the amount of embedding, or the extent to which
the items are placed within a larger meaningful context. With low embedding, the recall task
is presented as a single, isolated event, as in the preceding examples. With high embedding,
the recall task is included within the context of a larger problem, such as asking a student to
recall the formula for the area of a circle when solving a word problem that requires that
formula.
Understand
As we indicated, when the primary goal of instruction is to promote retention, the focus
is on objectives that emphasize Remember. When the goal of instruction is to promote transfer,
however, the focus shifts to the other five cognitive processes, Understand through Create. Of
these, arguably the largest category of transfer-based educational objectives emphasized in
institutions is Understand. Students are said to Understand when they are able to construct
meaning from instructional messages, including oral, written, and graphic communications,
however they are presented to students: during lectures, in books, or on computer monitors.
Examples of potential instructional messages include an in-class physics demonstration, a
geological formation seen on a field trip, a computer simulation of a trip through an art
museum, and a musical work played by an orchestra, as well as numerous verbal, pictorial, and
symbolic representations on paper.
Students understand when they build connections between the "new" knowledge to be
gained and their prior knowledge. More specifically, the incoming knowledge is integrated
with existing schemas and cognitive frameworks. Since concepts are the building blocks for
these schemas and frameworks, Conceptual knowledge provides a basis for understanding.
Cognitive processes in the category of Understand include interpreting, exemplifying,
classifying, summarizing, inferring, comparing, and explaining.
Interpreting
Interpreting occurs when a student is able to convert information from one
representational form to another. Interpreting may involve converting words to words (e.g.,
paraphrasing), pictures to words, words to pictures, numbers to words, words to numbers,
musical notes to tones, and the like. Alternative terms are translating, paraphrasing,
representing, and clarifying.
Sample objectives and corresponding assessments: In interpreting, when given information in
one form of representation, a student is able to change it into another form. For example, an
objective could be to learn to paraphrase the important functions of compiler phases. A
corresponding assessment asks a student to describe, in his or her own words, what each
compiler phase does. In science, an objective
could be to learn to draw pictorial representations of various natural phenomena. A
corresponding assessment item asks a student to draw a series of diagrams illustrating
photosynthesis. In mathematics, a sample objective could be to learn to translate number
sentences expressed in words into algebraic equations expressed in symbols. A corresponding
assessment item asks a student to write an equation that corresponds to the statement "There
are twice as many boys as girls in this class."
Assessment formats: Appropriate test item formats include both constructed response (i.e.,
supply an answer) and selected response (i.e., choose an answer). Information is
presented in one form, and students are asked either to construct or to select the same
information in a different form. For example, a constructed response task is: "Write an
equation that corresponds to the following statement, using T for total cost and B for
number of bundles. The total cost of mailing a package is Rs. 2.00 for the first bundle
plus Rs.1.50 for each additional bundle." A selection version of this task is: "Which
equation corresponds to the following statement, where T stands for total cost and B for
number of bundles?
The total cost of mailing a package is Rs. 2.00 for the first bundle plus Rs.1.50 for each
additional bundle (a) T = Rs.3.50 + B, (b) T = Rs. 2.00 + Rs. 1.50(B), (c) T = Rs. 2.00 +
Rs.1.50(B-1)."
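As a quick arithmetic check (not part of the original item), option (c) is the equation that matches the statement, and the earlier "twice as many boys as girls" statement translates in the same spirit:

$$T = 2.00 + 1.50(B-1):\quad B=1 \Rightarrow T=2.00,\qquad B=3 \Rightarrow T = 2.00 + 1.50\times 2 = 5.00.$$
$$\text{With}\ B\ \text{boys and}\ G\ \text{girls, "twice as many boys as girls" gives}\ B = 2G\ (\text{not}\ 2B = G).$$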
To increase the probability that interpreting rather than remembering is being assessed,
the information included in the assessment task must be new. "New" here means that
students did not encounter it during instruction. Unless this rule is observed, we cannot
ensure that interpreting rather than remembering is being assessed. If the assessment task
is identical to a task or example used during instruction, we are probably assessing
remembering, despite our efforts to the contrary. Although we will not repeat this point
from here on, it applies to each of the process categories and cognitive processes beyond
Remember. If assessment tasks are to tap higher-order cognitive processes, they must be
constructed so that students cannot answer them correctly by relying on memory alone.
Exemplifying
Exemplifying occurs when a student gives a specific example or instance of a general
concept or principle. Exemplifying involves identifying the defining features of the general
concept or principle (e.g., an isosceles triangle must have two equal sides) and using these
features to select or construct a specific instance (e.g., being able to select which of three
presented triangles is an isosceles triangle). Alternative terms are illustrating and instantiating.
Assessment formats: Exemplifying tasks can involve the constructed response format in which
the student must create an example or the selected response format in which the student must
select an example from a given set. The science example, "Locate an inorganic compound and
tell why it is inorganic," requires a constructed response. In contrast, the item "Which of these
is an inorganic compound? (a) iron, (b) protein, (c) blood, (d) leaf mold" requires a selected
response.
Classifying
Classifying occurs when a student recognizes that something (e.g., a particular instance
or example) belongs to a certain category (e.g., concept or principle). Classifying involves
detecting relevant features or patterns that "fit" both the specific instance and the concept or
principle. Classifying is a complementary process to exemplifying. Whereas exemplifying
begins with a general concept or principle and requires the student to find a specific instance
or example, classifying begins with a specific instance or example and requires the student to
find a general concept or principle. Alternative terms for classifying are categorizing and
subsuming.
Assessment formats: In constructed response tasks, a student is given an instance and must
produce its related concept or principle. In selected response tasks, a student is given an
instance and must select its concept or principle from a list. In a sorting task, a student is given
a set of instances and must determine which ones belong in a specified category and which
ones do not, or must place each instance into one of multiple categories.
Summarizing
Summarizing occurs when a student suggests a single statement that represents
presented information or abstracts a general theme. Summarizing involves constructing a
representation of the information, such as the meaning of a scene in a play, and abstracting a
summary from it, such as determining a theme or main points. Alternative terms are
generalizing and abstracting.
Inferring
Sample objectives and corresponding assessments: In inferring, when given a set or series of
examples or instances, a student finds a concept or principle that accounts for them. For
example, in mathematics, an objective could be to learn to infer the relationship, expressed as
an equation, that represents several observations of values for two variables. An assessment
item asks a student to describe the relationship as an equation involving x and y for situations
in which if x is 1, then y is 0; if x is 2, then y is 3; and if x is 3, then y is 8.
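One relationship that fits these observations (a worked check; the module itself does not state the answer) is:

$$y = x^{2} - 1:\qquad x=1 \Rightarrow y = 1-1 = 0,\quad x=2 \Rightarrow y = 4-1 = 3,\quad x=3 \Rightarrow y = 9-1 = 8.$$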
Assessment formats: Three common tasks that require inferring (often along with
implementing) are completion tasks, analogy tasks, and oddity tasks. In completion tasks, a
student is given a series of items and must determine what will come next, as in the number
series example above. In analogy tasks, a student is given an analogy of the form A is to B as
C is to D, such as "nation" is to "president" as "state" is to ____________. The student's task
is to produce or select a term that fits in the blank and completes the analogy (such as
"governor"). In an oddity task, a student is given three or more items and must
determine which does not belong. For example, a student may be given three physics problems,
two involving one principle and another involving a different principle. To focus solely on the
inferring process, the question in each assessment task could be to state the underlying concept
or principle the student is using to arrive at the correct answer.
Comparing
Comparing involves detecting similarities and differences between two or more objects,
events, ideas, problems, or situations, such as determining how a well known event (e.g., a
recent political scandal) is like a less familiar event (e.g., a historical political scandal).
Comparing includes finding one-to-one correspondences between elements and patterns in one
object, event, or idea and those in another object, event, or idea. When used in conjunction with
inferring (e.g., first, abstracting a rule from the more familiar situation) and implementing (e.g.,
second, applying the rule to the less familiar situation), comparing can contribute to reasoning
by analogy. Alternative terms are contrasting, matching, and mapping.
Sample objectives and corresponding assessments: In comparing, when given new information,
a student detects correspondences with more familiar knowledge. For example, in social
studies, an objective could be to understand historical events by comparing them to familiar
situations. A corresponding assessment question is "How is the Indian culture revolution like
a family fight or an argument between friends?" In the natural sciences, a sample objective
could be to learn to compare an electrical circuit to a more familiar system. In assessment, we
ask "How is an electrical circuit like water flowing through a pipe?" Comparing may also
involve determining correspondences between two or more presented objects, events, or ideas.
In mathematics, a sample objective could be to learn to compare structurally similar word
problems. A corresponding assessment question asks a student to tell how a certain mixture
problem is like a certain work problem.
Assessment formats: A major technique for assessing the cognitive process of comparing is
mapping. In mapping, a student must show how each part of one object, idea, problem, or
situation corresponds to (or maps onto) each part of another. For example, a student could be
asked to detail how the battery, wire, and resistor in an electrical circuit are like the pump,
pipes, and pipe constructions in a water flow system, respectively.
Explaining
Explaining occurs when a student is able to construct and use a cause-and-effect model
of a system. The model may be derived from a formal theory (as is often the case in the natural
sciences) or may be grounded in research or experience (as is often the case in the social
sciences and humanities). A complete explanation involves constructing a cause-and-effect
model, including each major part in a system or each major event in the chain, and using the
model to determine how a change in one part of the system or one "link" in the chain affects a
change in another part. An alternative term for explaining is constructing a model.
Assessment formats: Several tasks can be aimed at assessing a student's ability to explain,
including reasoning, troubleshooting, redesigning, and predicting. In reasoning tasks, a student
is asked to offer a reason for a given event. For example, "Why does air enter a bicycle tire
pump when you pull up on the handle?" In this case, an answer such as "It is forced in because
the air pressure is less inside the pump than outside" involves finding a principle that accounts
for a given event. In troubleshooting, a student is asked to diagnose what could have gone
wrong in a malfunctioning system. For example, "Suppose you pull up and press down on the
handle of a bicycle tire pump several times but no air comes out. What's wrong?" In this case,
the student must find an explanation for a symptom, such as "There is a hole in the cylinder"
or "A valve is stuck in the open position." In redesigning, a student is asked to change the
system to accomplish some goal. For example, "How could you improve a bicycle tire pump
so that it would be more efficient?" To answer this question, a student must imagine altering
one or more of the components in the system, such as "Put lubricant between the piston and
the cylinder."
In predicting, a student is asked how a change in one part of a system will effect a change in
another part of the system. For example, "What would happen if you increased the diameter
of the cylinder in a bicycle tire pump?" This question requires that the student "operate" the
mental model of the pump to see that the amount of air moving through the pump could be
increased by increasing the diameter of the cylinder.
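A rough way to see this prediction: per stroke, an idealised pump moves a volume of air equal to the cylinder's swept volume, V = π(d/2)²L, so doubling the diameter roughly quadruples the air moved per stroke. A minimal sketch in Python, assuming an idealised pump with a fixed stroke length (the numbers below are illustrative, not taken from the text):

    import math

    def air_per_stroke(diameter_cm, stroke_cm):
        # Swept volume of the cylinder: pi * r^2 * stroke length (idealised pump)
        radius = diameter_cm / 2
        return math.pi * radius ** 2 * stroke_cm

    print(air_per_stroke(3.0, 30.0))   # baseline cylinder: ~212 cm^3 per stroke
    print(air_per_stroke(6.0, 30.0))   # doubled diameter: ~848 cm^3 per stroke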
Apply
Apply involves using procedures to perform exercises or solve problems. Thus, Apply
is closely linked with Procedural knowledge. An exercise is a task for which the student already
knows the proper procedure to use, so the student has developed a fairly routinized approach
to it. A problem is a task for which the student initially does not know what procedure to use,
so the student must locate a procedure to solve the problem. The Apply category consists of
two cognitive processes: executing-when the task is an exercise (familiar)-and implementing-
when the task is a problem (unfamiliar). When the task is a familiar exercise, students generally
know what Procedural knowledge to use. When given an exercise (or set of exercises), students
typically perform the procedure with little thought. For example, an algebra student confronted
with the 50th exercise involving quadratic equations might simply "plug in the numbers and
turn the crank."
When the task is an unfamiliar problem, however, students must determine what
knowledge they will use. If the task appears to call for Procedural knowledge and no available
procedure fits the problem situation exactly, then modifications in selected Procedural
knowledge may be necessary. In contrast to executing, then, implementing requires some
degree of understanding of the problem as well as of the solution procedure. In the case of
implementing, then, to understand conceptual knowledge is a prerequisite to being able to apply
procedural knowledge.
Executing
In executing, a student routinely carries out a procedure when confronted with a
familiar task (i.e., exercise). The familiarity of the situation often provides sufficient clues to
guide the choice of the appropriate procedure to use. Executing is more frequently associated
with the use of skills and algorithms than with techniques and methods. Skills and algorithms have two qualities that make
them particularly amenable to executing. First, they consist of a sequence of steps that are
generally followed in a fixed order. Second, when the steps are performed correctly, the end
result is a predetermined answer. An alternative term for executing is carrying out.
Assessment formats: In executing, a student is given a familiar task that can be performed using
a well-known procedure. For example, an execution task is "Solve for x: x² + 2x - 3 = 0 using
the technique of completing the square." Students may be asked to supply the answer or, where
appropriate, select from among a set of possible answers. Furthermore, because the emphasis
is on the procedure as well as the answer, students may be required not only to find the answer
but also to show their work.
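For reference, the worked steps for this exercise are routine: x² + 2x - 3 = 0, so x² + 2x + 1 = 4, (x + 1)² = 4, x + 1 = ±2, giving x = 1 or x = -3. A minimal sketch in Python that carries out the same completing-the-square procedure numerically:

    import math

    # Solve x^2 + bx + c = 0 by completing the square: (x + b/2)^2 = (b/2)^2 - c
    b, c = 2, -3
    half_b = b / 2
    rhs = half_b ** 2 - c                      # (x + 1)^2 = 4
    roots = (-half_b + math.sqrt(rhs), -half_b - math.sqrt(rhs))
    print(roots)                               # (1.0, -3.0)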
Implementing
Implementing occurs when a student selects and uses a procedure to perform an
unfamiliar task. Because selection is required, students must possess an understanding of the
type of problem encountered as well as the range of procedures that are available. Thus,
implementing is used in conjunction with other cognitive process categories, such as
Understand and Create. Because the student is faced with an unfamiliar problem, he or she
does not immediately know which of the available procedures to use. Furthermore, no single
procedure may be a "perfect fit" for the problem; some modification in the procedure may be
needed. Implementing is more frequently associated with the use of techniques and methods
than with skills and algorithms. Techniques and methods have two qualities that make them
particularly amenable to implementing. First, the procedure may be more like a "flow chart"
than a fixed sequence; that is, the procedure may have "decision points" built into it (e.g., after
completing Step 3, should I do Step 4A or Step 4B?). Second, there often is no single, fixed
answer that is expected when the procedure is applied correctly.
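A hypothetical illustration of a procedure with a built-in decision point; the steps and the branching condition below are invented for illustration and are not drawn from the text:

    def analyse_sample(sample):
        # Steps 1-3: fixed preparatory steps
        cleaned = sample.strip().lower()              # Step 1
        tokens = cleaned.split()                      # Step 2
        size = len(tokens)                            # Step 3
        # Decision point: after Step 3, choose Step 4A or Step 4B
        if size > 10:
            return "summary: " + " ".join(tokens[:10])   # Step 4A: long input
        return "full: " + " ".join(tokens)               # Step 4B: short input

    print(analyse_sample("A short example sentence"))

The point of the sketch is only that the flow of steps, not the final answer, is fixed in advance; which branch is taken depends on the situation the procedure meets.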
The notion of no single, fixed answer is especially applicable to objectives that call for
applying conceptual knowledge such as theories, models, and structures, where no procedure
has been developed for the application. Consider an objective such as "The student shall be
able to apply a social psychological theory of crowd behaviour to crowd control." Social
psychological theory is Conceptual, not Procedural, knowledge. This is clearly an Apply objective, however, and because there is no procedure for making the application that the theory would clearly structure and guide, this objective is just barely on the Apply side of Create, but Apply it is. So it would be classified as implementing. To see
why it fits, think of the Apply category as structured along a continuum. It starts with the
narrow, highly structured execute, in which the known Procedural knowledge is applied almost
routinely. It continues through the broad, increasingly unstructured implement, in which, at the
beginning, the procedure must be selected to fit a new situation. In the middle of the category,
the procedure may have to be modified to implement it. At the far end of implementing, where
there is no set Procedural knowledge to modify, a procedure must be manufactured out of
Conceptual knowledge using theories, models, or structures as a guide. So, although Apply is
closely linked to Procedural knowledge, and this linkage carries through most of the category
of Apply, there are some instances in implementing to which one applies Conceptual
knowledge as well. An alternative term for implementing is using.
Differentiating
Differentiating involves distinguishing the parts of presented material in terms of their relevance or importance, attending to the relevant or important information and ignoring the rest.
Sample objectives and corresponding assessments: In the social sciences, an objective could
be to learn to determine the major points in research reports. A corresponding assessment item
requires a student to circle the main points in an archaeological report about an ancient Indian
city (such as when the city began and when it ended, the population of the city over the course
of its existence, the geographic location of the city, the physical buildings in the city, its
economic and cultural function, the social organization of the city, why the city was built and
why it was deserted). Similarly, in the natural sciences, an objective could be to select the main
steps in a written description of how something works. A corresponding assessment item asks
a student to read a chapter in a book that describes lightning formation and then to divide the
process into major steps (including moist air rising to form a cloud, creation of updrafts and
downdrafts inside the cloud, separation of charges within the cloud, movement of a stepped
leader downward from cloud to ground, and creation of a return stroke from ground to cloud).
Finally, in mathematics, an objective could be to distinguish between relevant and irrelevant
numbers in a word problem. An assessment item requires a student to circle the relevant
numbers and cross out the irrelevant numbers in a word problem.
Assessment formats: Differentiating can be assessed with constructed response or selection
tasks. In a constructed response task, a student is given some material and is asked to indicate
which parts are most important or relevant, as in this example: "Write the numbers that are
needed to solve this problem: Pencils come in packages that contain 12 each and cost Rs.2.00
each. John has Rs.5.00 and wishes to buy 24 pencils. How many packages does he need to
buy?" In a selection task, a student is given some material and is asked to choose which parts
are most important or relevant, as in this example: "Which numbers are needed to solve this
problem? Pencils come in packages that contain 12 each and cost Rs.2.00 each. John has
Rs.5.00 and wishes to buy 24 pencils. How many packages does he need to buy? (a) 12,
Rs.2.00, Rs.5.00, 24; (b) 12, Rs.2.00, Rs.5.00; (c) 12, Rs.2.00, 24; (d) 12, 24."
Organizing
Organizing involves identifying the elements of a communication or situation and
recognizing how they fit together into a coherent structure. In organizing, a student builds
systematic and coherent connections among pieces of presented information. Organizing
usually occurs in conjunction with differentiating. The student first identifies the relevant or
important elements and then determines the overall structure within which the elements fit.
Organizing can also occur in conjunction with attributing, in which the focus is on determining
the author's intention or point of view. Alternative terms for organizing are structuring,
integrating, finding coherence, outlining, and parsing.
Attributing
Attributing occurs when a student is able to ascertain the point of view, biases, values,
or intention underlying communications. Attributing involves a process of deconstruction, in
which a student determines the intentions of the author of the presented material. In contrast to
interpreting, in which the student seeks to Understand the meaning of the presented material,
attributing involves an extension beyond basic understanding to infer the intention or point of
view underlying the presented material. An alternative term is deconstructing.
Assessment formats: Attributing can be assessed by presenting some written or oral material
and then asking a student to construct or select a description of the author's or speaker's point
of view, intentions, and the like. For example, a constructed response task is "What is the
author's purpose in writing the essay you read on the Amazon rain forests?" A selection version
of this task is "The author's purpose in writing the essay you read is to: (a) provide factual
information about Amazon rain forests, (b) alert the reader to the need to protect rain forests,
(c) demonstrate the economic advantages of developing rain forests, or (d) describe the
consequences to humans if rain forests are developed." Alternatively, students might be asked
to indicate whether the author of the essay would (a) strongly agree, (b) agree, (c) neither agree
nor disagree, (d) disagree, or (e) strongly disagree with several statements. Statements like
"The rainforest is a unique type of ecological system" would follow.
Evaluate
Evaluate is defined as making judgments based on criteria and standards. The criteria
most often used are quality, effectiveness, efficiency, and consistency. They may be
determined by the student or by others. The standards may be either quantitative (i.e., Is this a
sufficient amount?) or qualitative (i.e., Is this good enough?). The standards are applied to the
criteria (e.g., Is this process sufficiently effective? Is this product of sufficient quality?). The
category Evaluate includes the cognitive processes of checking (judgments about internal consistency) and critiquing (judgments based on external criteria). It must be emphasized that
not all judgments are evaluative. For example, students make judgments about whether a
specific example fits within a category. They make judgments about the appropriateness of a
particular procedure for a specified problem. They make judgments about whether two objects
are similar or different. Most of the cognitive processes, in fact, require some form of judgment.
What most clearly differentiates Evaluate as defined here from other judgments made by
students is the use of standards of performance with clearly defined criteria. Is this machine
working as efficiently as it should be? Is this method the best way to achieve the goal? Is this
approach more cost effective than other approaches? Such questions are addressed by people
engaged in Evaluating.
Checking
Checking involves testing for internal inconsistencies or fallacies in an operation or a
product. For example, checking occurs when a student tests whether or not a conclusion follows
from its premises, whether data support or disconfirm a hypothesis, or whether presented
material contains parts that contradict one another. When combined with planning (a cognitive
process in the category Create) and implementing (a cognitive process in the category Apply),
checking involves determining how well the plan is working. Alternative terms for checking
are testing, detecting, monitoring, and coordinating.
Sample objectives and corresponding assessments: In checking, students look for internal
inconsistencies. A sample objective in the social sciences could be to learn to detect
inconsistencies in persuasive messages. A corresponding assessment task asks students to
watch a television advertisement for a political candidate and point out any logical flaws in the
persuasive message. A sample objective in the sciences could be to learn to determine whether
a scientist's conclusion follows from the observed data. An assessment task asks a student to
read a report of a chemistry experiment and determine whether or not the conclusion follows
from the results of the experiment.
Assessment formats: Checking tasks can involve operations or products given to the students
or ones created by the students themselves. Checking can also take place within the context of
carrying out a solution to a problem or performing a task, where one is concerned with the
consistency of the actual implementation (e.g., Is this where I should be in light of what I've
done so far?).
Critiquing
Critiquing involves judging a product or operation based on externally imposed criteria
and standards. In critiquing, a student notes the positive and negative features of a product and
makes a judgment based at least partly on those features. Critiquing lies at the core of what has
been called critical thinking. An example of critiquing is judging the merits of a particular
solution to the problem of acid rain in terms of its likely effectiveness and its associated costs
(e.g., requiring all power plants throughout the country to restrict their smokestack emissions
to certain limits). An alternative term is judging.
Sample objectives and corresponding assessments: In critiquing, students judge the merits of a
product or operation based on specified or student-determined criteria and standards. In the
social sciences, an objective could be to learn to evaluate a proposed solution (such as
"eliminate all grading") to a social problem (such as "how to improve K-12 education") in
terms of its likely effectiveness. In the natural sciences, an objective could be to learn to
evaluate the reasonableness of a hypothesis (such as the hypothesis that strawberries are
growing to extraordinary size because of the unusual alignment of the stars). Finally, in
mathematics, an objective could be to learn to judge which of two alternative methods is a more
effective and efficient way of solving given problems (such as judging whether it is better to
find all prime factors of 60 or to produce an algebraic equation to solve the problem "What are
the possible ways you could multiply two whole numbers to get 60?").
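To judge the two methods, it helps to see what the problem actually asks for: the whole-number pairs whose product is 60. A minimal sketch in Python that enumerates them directly, which either proposed method should reproduce:

    n = 60
    pairs = [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]
    print(pairs)   # [(1, 60), (2, 30), (3, 20), (4, 15), (5, 12), (6, 10)]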
Assessment formats: A student may be asked to critique his or her own hypotheses or creations
or those generated by someone else. The critique could be based on positive, negative, or both
kinds of criteria and yield both positive and negative consequences.
Create
Create involves putting elements together to form a coherent or functional whole.
Objectives classified as Create have students make a new product by mentally reorganizing
some elements or parts into a pattern or structure not clearly present before. The processes
involved in Create are generally coordinated with the student's previous learning experiences.
Although Create requires creative thinking on the part of the student, this is not completely free
creative expression unconstrained by the demands of the learning task or situation.
To some persons, creativity is the production of unusual products, often as a result of some
special skill. Create, as used here, however, although it includes objectives that call for unique
production, also refers to objectives calling for production that all students can and will do. If
nothing else, in meeting these objectives, many students will create in the sense of producing
their own synthesis of information or materials to form a new whole, as in writing, painting,
sculpting, building, and so on.
Although many objectives in the Create category emphasize originality (or uniqueness),
educators must define what is original or unique. Can the term unique be used to describe the
work of an individual student or is it reserved for use with a group of students (e.g., "This is
unique for a fifth-grader")? It is important to note, however, that many objectives in the Create
category do not rely on originality or uniqueness. The teachers' intent with these objectives is
that students should be able to synthesize material into a whole. This synthesis is often required
in papers in which the student is expected to assemble previously taught material into an
organized presentation.
Although the process categories of Understand, Apply, and Analyze may involve
detecting relationships among presented elements, Create is different because it also involves
the construction of an original product. Unlike Create, the other categories involve working
with a given set of elements that are part of a given whole; that is, they are part of a larger
structure the student is trying to understand. In Create, on the other hand, the student must draw
upon elements from many sources and put them together into a novel structure or pattern
relative to his or her own prior knowledge. Create results in a new product, that is, something
that can be observed and that is more than the student's beginning materials. A task that requires
Create is likely to require aspects of each of the earlier cognitive process categories to some
extent, but not necessarily in the order in which they are listed in the Taxonomy Table.
We recognize that composition (including writing) often, but not always, requires the
cognitive processes associated with Create. For example, Create is not involved in writing that
represents the remembering of ideas or the interpretation of materials. We also recognize that
deep understanding that goes beyond basic understanding can require the cognitive processes
associated with Create. To the extent that deep understanding is an act of construction or
insight, the cognitive processes of Create are involved.
The creative process can be broken into three phases: problem representation, in which a
student attempts to understand the task and generate possible solutions; solution planning, in
which a student examines the possibilities and devises a workable plan; and solution execution,
in which a student successfully carries out the plan. Thus, the creative process can be thought
of as starting with a divergent phase in which a variety of possible solutions are considered as
the student attempts to understand the task (generating). This is followed by a convergent
phase, in which the student devises a solution method and turns it into a plan of action
(planning). Finally, the plan is executed as the student constructs the solution (producing). It is
not surprising, then, that Create is associated with three cognitive processes: generating,
planning, and producing.
Generating
Generating involves representing the problem and arriving at alternatives or hypotheses
that meet certain criteria. Often the way a problem is initially represented suggests possible
solutions; however, redefining or coming up with a new representation of the problem may
suggest different solutions. When generating transcends the boundaries or constraints of prior
knowledge and existing theories, it involves divergent thinking and forms the core of what can
be called creative thinking.
Generating is used in a restricted sense here. Understand also requires generative
processes, which we have included in translating, exemplifying, summarizing, inferring,
classifying, comparing, and explaining. However, the goal of Understand is most often
convergent (that is, to arrive at a single meaning). In contrast, the goal of generating within
Create is divergent (that is, to arrive at various possibilities). An alternative term for generating
is hypothesizing.
Planning
Planning involves devising a solution method that meets a problem's criteria, that is,
developing a plan for solving the problem. Planning stops short of carrying out the steps to
create the actual solution for a given problem. In planning, a student may establish sub-goals,
or break a task into subtasks to be performed when solving the problem. Teachers often skip
stating planning objectives, instead stating their objectives in terms of producing, the final stage
of the creative process. When this happens, planning is either assumed or implicit in the
producing objective. In this case, planning is likely to be carried out by the student covertly
during the course of constructing a product (i.e., producing). An alternative term is designing.
Sample objectives and corresponding assessments: In planning, when given a problem
statement, a student develops a solution method. In history, a sample objective could be to be
able to plan research papers on given historical topics. An assessment task asks the student,
prior to writing a research paper on the causes of the Indian Revolution, to submit an outline
of the paper, including the steps he or she intends to follow to conduct the research. In the
natural sciences, a sample objective could be to learn to design studies to test various
hypotheses. An assessment task asks students to plan a way of determining which of three
factors determines the rate of oscillation of a pendulum. In mathematics, an objective could be
to be able to lay out the steps needed to solve geometry problems. An assessment task asks
students to devise a plan for determining the volume of the frustum of a pyramid (a task not
previously considered in class). The plan may involve computing the volume of the large
pyramid, then computing the volume of the small pyramid, and finally subtracting the smaller
volume from the larger.
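A minimal sketch in Python of the plan described above, assuming a square-based frustum and using similar triangles to recover the full pyramid's height from the two base edges (the numeric values are illustrative):

    def frustum_volume(base_edge, top_edge, height):
        # By similar triangles, the apex sits at H = height * base_edge / (base_edge - top_edge)
        full_height = height * base_edge / (base_edge - top_edge)
        small_height = full_height - height
        large = base_edge ** 2 * full_height / 3    # volume of the large pyramid
        small = top_edge ** 2 * small_height / 3    # volume of the small, cut-off pyramid
        return large - small                        # subtract, as in the student's plan

    print(frustum_volume(6, 3, 4))   # 84.0, matching the standard frustum formula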
Producing
Producing involves carrying out a plan for solving a given problem that meets certain
specifications. As we noted earlier, objectives within the category Create may or may not
include originality or uniqueness as one of the specifications. So it is with producing objectives.
Producing can require the coordination of the four types of knowledge. An alternative term is
constructing.
Sample objectives and corresponding assessments: In producing, a student is given a functional
description of a goal and must create a product that satisfies the description. It involves carrying
out a solution plan for a given problem. Sample objectives involve producing novel and useful
products that meet certain requirements. In history, an objective could be to learn to write
papers pertaining to particular historical periods that meet specified standards of scholarship.
An assessment task asks students to write a short story that takes place during the Indian
Revolution. In science, an objective could be to learn to design habitats for certain species and
certain purposes. A corresponding assessment task asks students to design the living quarters
of a space station. In all these examples, the specifications become the criteria for evaluating
student performance relative to the objective. These specifications, then, should be included in
a scoring rubric that is given to the students in advance of the assessment.
Assessment formats: A common task for assessing producing is a design task, in which students
are asked to create a product that corresponds to certain specifications. For example, students
may be asked to produce schematic plans for a new institution that include new ways for
students to conveniently store their personal belongings.
Assessment Procedures
Assessment methods are the strategies, techniques, tools and instruments for collecting
information to determine the extent to which students demonstrate desired learning outcomes.
Several methods should be used to assess student learning outcomes. Relying on only one method to provide information about the program will reflect only a part of students' achievement. Additionally, a student learning outcome may be difficult to assess using only one method. For each student learning outcome, a combination of direct and indirect assessment methods should therefore be used. For example, responses from student surveys may be informative on their own, but when combined with students' test results they become more meaningful, valid, and reliable.
Principles of Assessment
Assessment will be valid
Assessment will be explicitly designed to measure student achievement of the intended
learning outcomes, and all intended learning outcomes will be summatively assessed. The
processes for the approval of new modules and programmes, and for amending existing
modules and programmes, will ensure that assessment is an integral part of module and
programme design, and the ongoing validity of assessment will be considered through annual
and periodic review.
Assessment will be reliable
To ensure the level of consistency that is necessary for assessment to be reliable, all awards at
the same academic level will be aligned with the institution's generic qualification descriptor,
level descriptor and assessment criteria for that level of award.
Assessment will be equitable
Different assessment methods may be appropriate for different learning styles. All programmes are therefore encouraged to employ (in a way that is consistent with the intended learning outcomes being assessed) a diversity of assessment methods, so that all students can demonstrate their knowledge, understanding and skills.
Assessment will be explicit and transparent
Prior to undertaking any assessment task, students will be clearly informed of the purpose and
requirements of the task and will be provided with the specific assessment criteria that will be
used for marking it. Feedback to students will be related to the stated learning outcomes and
specific assessment criteria. Clear information on the policies and processes relating to
assessment will be easily available to all involved in the assessment process.
Assessment will support the student learning process
All assessment tasks influence the way in which students approach their learning, and this will
be taken into account in the design of all assessment tasks.
Assessment will be efficient
Assessment will be efficient for both students and staff, so that learning outcomes are not over-assessed and knowledge and skills can be sampled.
Direct Method of Assessment
The direct method of assessment provides evidence of the actual outcomes of classroom learning. The evidence it yields is concrete: quantifiable, measurable and visible. It clearly shows what students have learned in a course, and it tells faculty members what students understand of the subject and what they can do with that understanding. This is the method most commonly used by faculty members. There are several methodologies of direct assessment:
1. Standardized Examination
2. Quiz
3. Simulations
4. Demonstrations
5. Capstone Projects
6. Portfolios
7. Oral Exams
The strength of direct measurement is that faculty members obtain concrete evidence, from a sample of work, of what students can do with their learning.
At the same time, direct measurement has its own weakness: some components of the teaching-learning process cannot be evaluated directly.
Indirect method of assessment
Indirect measurement mostly captures how students experience their learning. It gathers opinions or survey responses about student learning and reflects students' perceptions of that learning. For example, a teacher may wish to know how interested students are in a particular subject; this kind of evidence can be collected only through indirect measurement. Indirect measures thus capture components of the affective domain. The main disadvantage of indirect measurement is that it is not useful for identifying the specific knowledge and skills a student has acquired.
Source : Assessment 101: Assessment Tips with Gloria Rogers, Ph.D. Direct and Indirect
Assessment, August 2006
However, considering the evidence parameter, indirect measures are not as strong as
direct measures because assumptions must be made about what exactly the self-report means.
If students report that they have attained a particular learning outcome, how can that report be
validated? An indirect assessment is useful in that it can be used to measure certain implicit
qualities of student learning, such as values, perceptions, and attitudes, from a variety of
perspectives. However, in the absence of direct evidence, assumptions must be made about
how well perceptions match the reality of actual achievement of student learning.
It is important to remember that all assessment methods have their limitations and contain some
bias. A meaningful assessment program would use both direct and indirect assessments from a
variety of sources (students, alumni, faculty, employers, etc.). This use of multiple assessment
methods provides converging evidence of student learning. Indirect methods provide a valuable
supplement to direct methods and are generally a part of a robust assessment program.
Examples of direct and indirect measures at the institutional level include:
Direct measures:
• Performance on tests of writing, critical thinking, or general knowledge
• Rubric scores for class assignments in General Education, interdisciplinary core courses, or other courses required of all students
• Performance on achievement tests
• Explicit self-reflections on what students have learned related to institutional programs such as service learning (e.g., asking students to name the three most important things they have learned in a program)
Indirect measures:
• Locally developed, commercial, or national surveys of student perceptions or self-report of activities (e.g., National Survey of Student Engagement)
• Transcript studies that examine patterns and trends of course selection and grading
• Annual reports including institutional benchmarks (e.g., graduation and retention rates, grade point averages of graduates, etc.)
• Proportion of upper-level courses compared to the same program at other institutions
• Graduate school placement rates
CONSTRUCTION OF TEST ITEMS AND QUESTIONS
INTRODUCTION
Teachers are concerned with their students achieving the specified learning outcomes in the
subjects they teach. They have to test the achievement of all those learning outcomes. Any
testing device should therefore attempt to test the entire content prescribed and taught by teachers.
Essay questions that are generally used in assignments by their very nature can only cover a
limited content. Objective items are most suitable for a wider coverage of content. In any
assessment of students, teachers must ensure objectivity and reliability of assessment. This unit
describes the rules for constructing different types of test items and questions and their
advantages and limitations.
A Selection question provides students with alternative answers from which to choose the correct answer. The following are the types of Selection items:
• Multiple Choice
• True /false (also called Alternate response)
• Matching
Construction
• The question must be simple, clear and unambiguous
• The scope of the answer must be limited by the use of words such as 'list', 'give reasons', 'define', etc.
• Questions must be interpretable in the same way by all students.
Example:
Write the quadratic equation. – This expects factual knowledge from the student.
A ball is thrown straight up, from 10 m above the ground, with a velocity of 20 m/s. When will the ball hit the ground? (Ignore air resistance.) – This question draws on both conceptual and procedural knowledge: the student must recognise that a quadratic equation applies and then know how to solve it. A problem-solving item of this kind therefore elicits both the student's conceptual knowledge and procedural knowledge.
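A minimal sketch in Python of the expected procedural step, assuming g ≈ 9.8 m/s² and taking upward as positive, so the height is h(t) = 10 + 20t - 4.9t² and the ball lands when h(t) = 0:

    import math

    # Solve -4.9 t^2 + 20 t + 10 = 0 with the quadratic formula
    a, b, c = -4.9, 20.0, 10.0
    disc = b ** 2 - 4 * a * c
    t1 = (-b + math.sqrt(disc)) / (2 * a)
    t2 = (-b - math.sqrt(disc)) / (2 * a)
    print(max(t1, t2))   # about 4.53 s; the negative root is not physical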
Example:
In a packet switching network, packets are routed from source to destination along a
single path having two intermediate nodes. If the message size is 48 bytes and each packet
contains a header of 3 bytes, then what is the optimum packet size?
The above question draws out the conceptual knowledge behind packet-switching networks. Solving the problem clearly demonstrates the student's understanding.
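A minimal sketch in Python of one common way to reason about this item, under explicitly assumed conditions: store-and-forward at each of the two intermediate nodes, equal link rates, transmission time proportional to bytes sent, negligible propagation and processing delay, and payload sizes that divide the 48-byte message. These modelling choices are assumptions, not given in the question:

    def total_time(payload, message=48, header=3, intermediate_nodes=2):
        packets = message // payload
        packet_size = payload + header
        # First packet crosses (intermediate_nodes + 1) links; each remaining
        # packet adds one more packet transmission time (store-and-forward)
        return (intermediate_nodes + 1 + packets - 1) * packet_size

    candidates = [p for p in range(1, 49) if 48 % p == 0]
    best = min(candidates, key=total_time)
    # Under these assumptions the minimum falls at an 8-byte payload,
    # i.e. an 11-byte packet, with a total of 88 byte-times
    print(best, best + 3, total_time(best))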
In software engineering, problem solving involves splitting large, complex goals into small, simpler ones; thinking about different, possibly parallel, solutions for each one; making the problem abstract enough that the same solution can be applied to other problems with the same abstract form; learning to use existing solutions instead of re-inventing the wheel; and thinking in terms of data flow. This procedure gives students a clear way to approach a problem in software engineering.
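A hypothetical illustration of that approach in Python (the task and all function names below are invented for illustration): a larger goal, "report the average word length in a file", decomposed into small reusable steps composed as a data flow:

    def read_text(path):
        # Sub-goal 1: get the raw text
        with open(path, encoding="utf-8") as f:
            return f.read()

    def split_words(text):
        # Sub-goal 2: reuse the built-in str.split instead of re-inventing it
        return text.split()

    def average_length(words):
        # Sub-goal 3: abstract enough to reuse on any list of words
        return sum(len(w) for w in words) / len(words) if words else 0.0

    def report(path):
        # The original goal, expressed as a data flow through the sub-goals
        return average_length(split_words(read_text(path)))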
True / False items
True/False items give students two choices from which to select the correct answer. A
statement is presented and students are asked to indicate whether the statement is true or false
as stated. There are only two choices for the student. These choices may be altered to suit the needs of the test situation: Yes/No, Right/Wrong, Correct/Incorrect.
True/False items are most often used to test a student's ability to
• recognize the correctness of a statement
• identify relationships
• identify attitudes, values and beliefs
• identify cause-and-effect relationships
• identify facts
• identify a new situation where principles are applicable
This type of item is most suitable when
• there are only two alternatives which are plausible
• there is only one correct response to the question
• a large amount of content needs to be tested
• reduction of reading on the test is important
• easy scoring is desired
True/False items are also called constant alternate response type.
Example
A ceiling fan in a room will push warm air downward
Narcotics are painkillers
Insects can be characterized by their three distinct body parts
Construction
The item should include only one central and significant idea.
Example:
The second method that can be used to determine the difficulty of a question is to run a test
analysis programme on test questions.
• This statement contains two ideas. A student may judge the two ideas differently: the statement may be true for one idea and false for the other, or true (or false) for both, so a single True/False response is ambiguous.
• The statement must be precise so that it can be judged as absolutely true or absolutely
false.
Flowers bloom in the springtime.
This statement is partly true in that many flowers do bloom in the spring but flowers also bloom
in other seasons. The statement is neither true nor false. It is partly true and partly false.
• The statements are to be short and written in simple language.
An individual with blood type AB negative may receive blood from any other individual
because there are no antigens in the recipient's blood to cause a reaction with antibodies that
may be in the donor's blood.
This statement is a very long one. It requires considerable time for students to decipher the
meaning of this question. It would be better to ask first if a person with AB negative blood can
receive blood from anyone else. Then ask about the presence of antigens and antibodies.
Matching items
A matching item presents two columns; the student matches each premise in Column A with a response in Column B. An example:
Column A                                             Column B
1. Best for measuring computational skill            a. Matching item
2. Least useful for educational diagnosis            b. Multiple Choice
3. Most difficult to score objectively               c. True-False item
4. Provides high scores by guessing alone            d. Short answer item
5. Measures greater variety of learning outcomes
6. Measures learning at recall level
Construction
• Give very clear instructions about how students must write the answers to each item, where
they are to mark their answers.
• The acceptable format for numbering matching questions is to place numbers in front of the premises on the left and letters in front of the responses on the right.
• Keep the lists as short as possible
• Arrange the lists in a logical order. If dates are used it is preferable to put them in a
chronological order.
• Use consistent labelling for both columns: the items in one column may be given letters while those in the other are given numbers.
Multiple Choice items
An example:
The transfer of heat in a steel bar from one end to the other end is by
a) Conduction
b) Convection
c) Radiation
d) Fusion
In the above example " the transfer of heat from one end of the steel bar to the other end" is the
main question. This is at the top of the item. This is the question to which the student must
select the correct answer. This statement or question is called Stem. The Stem can be either in
the form of a direct question or an incomplete sentence. This acts as a stimulus to evoke the
correct response from students. The alternatives provided as possible answers are called
Options. In the example four options are given. The student has to choose the correct answer
from the options. There may be four or even five options. In the example given items at a, b, c,
d are options.
The correct answer is called the Key. In the example, option (a) is the key. The options other than the correct answer are called Distracters. Options b, c, and d are the distracters.
Example:
Voltage drop in a resistor is NOT proportional to
a) current
b) resistance
c) power dissipation
d) physical dimensions or size
(Notice NOT, the negative. It is given in capitals for emphasis; it may also be underlined.)
The stem must be a complete question by itself, not requiring the student to read the options in order to discover what is being asked.
Example:
When two resistors of value 10 ohms and 30 ohms are connected in series, the net resistance
value will be
a) 3 ohms
b) 20 ohms
c) 40 ohms
d) 300 ohms
In this item, the student can work out the answer without referring to the options, since the stem is a complete question by itself.
The content of the question must be made clear to avoid confusion. State the stem of the item in a simple, clear sentence. Use simple language so that students understand the statement without much difficulty.
Example:
Poor construction
The paucity of plausible, but incorrect statement that can be related to a central idea poses a
problem when constructing which one of the following types of test items?
a) Short answer
b) True- False
c) Multiple choice
d) Essay
Better constructed item
The lack of plausible but incorrect alternatives will cause the greatest difficulty when
constructing
a) Short answer question
b) True-False
c) Multiple Choice item
d) Essay
Put as much of the wording as possible in the stem of the item; anything that would otherwise need repeating in every option should be included in the stem.
Example:
In objective testing the term objective
a) refers to the method of identifying the learning outcomes
b) refers to the method of selecting the test content
c) refers to the method of presenting the problem
d) refers to the method of scoring the answers.
The phrase 'refers to the method of' repeats itself in all four options. It must be moved to the stem, which would then read: 'In objective testing, the term objective refers to the method of ...'
The options must be closely related to the stem.
Example:
The property of a circuit that tends to oppose a change in current is called
a) Conductance
b) Voltage
c) temperature
d) Inductance
In the above example b and c are not properties of a circuit. These are not good options. Better
options would be to replace b and c by
b) capacitance
c) resistance
The options should be parallel in structure, i.e. they should fit grammatically with the stem. Grammatical consistency of all options is very important.
Example:
The station where an aircraft is taken for repairs is called an
a) apron
b) hangar
c) tower
d) workshop
In this example only one option fits the grammatical structure of the stem, since the article 'an' fits only 'apron'. To improve the item, the stem may end with '... is called a/an', or be reworded so that the article does not give a clue.
The item must not contain clues for the student, such as a combination of singulars and plurals in the options.
Example:
The direction of propagation of an electromagnetic wave in free space is
a) along the electric field
b) along the magnetic field
c) in the plane of electric and magnetic field
d) perpendicular to the surface containing the two fields
In the above example, the precision and length of the key, option d, make it stand out from the rest. To avoid this, comparable qualifying phrases should be worked into each of the options.
Example:
An ion is
a) a charged particle
b) an atom which has gained or lost electrons
c) a neutral particle
d) formed in electrolytes
Here the stem is vague and three of the options given are acceptable.
Distracters must be incorrect yet likely to be plausible to weaker students. This means that the
distracters must be believable.
Example:
Waste and overflow fittings for a bathtub are installed
a) before the bathtub is set in place
b) after the bathtub is set in place
c) at the same time as the trap
d) none of the above.
It is unlikely that any student would choose d as the answer, particularly since all the other options are plausible alternatives.
Another example:
A person invested Rs 500 in a business. He sold goods worth Rs 550 in this business. The %
profit he got was
a) Rs.50
b) Rs.10
c) 50%
d) Rs 550
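For reference, the profit here is Rs 50 on a cost of Rs 500, i.e. 10%; since 10% itself does not appear as an option and the options mix rupee amounts with percentages, the distracters are weak. A minimal check of the arithmetic in Python:

    cost, revenue = 500, 550
    profit = revenue - cost                 # Rs 50
    percent_profit = 100 * profit / cost    # 10.0 per cent
    print(profit, percent_profit)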
Items should not be very lengthy or involve lengthy calculations.
Example:
What is the equivalent resistance of 330 K ohms and a 100 K ohm resistor connected in
parallel?
b) 76.74 K ohms
c) 82.05 K ohms
d) 120 K ohms
e) 430 K ohms
The student has to work through lengthy calculations to arrive at the correct answer. The item
should test the understanding of the principle of resistors in parallel. It is not expected to test
the ability of calculation. The item may be reworded suitably.
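For reference, the equivalent resistance of two resistors in parallel is R1·R2/(R1 + R2). A minimal check in Python of the arithmetic behind the options:

    r1, r2 = 330.0, 100.0              # values in kilo-ohms
    r_parallel = r1 * r2 / (r1 + r2)
    print(round(r_parallel, 2))        # 76.74 K ohms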
The level of information required to reject wrong responses should not be higher than that required to select the correct response.
Example:
Coulomb is the unit of measurement of
a) inductive reactance
b) electric charge
c) bandwidth
d) transconductance
Here, to reject options a, c, and d a higher level of information is required than to select the key. Hence, a, c, and d are poor distracters. The options for the item may be rewritten to suit the level of learning under test. The options may be modified as under:
a) resistance
b) charge
c) power
d) potential difference
Advantages of multiple-choice items
• versatility in measuring all levels of cognitive ability
• highly reliable test scores
• scoring efficiency and accuracy
• objective measurement of student achievement or ability
• a wide sampling of content or objectives
• a reduced guessing factor when compared to True-False items
• different response alternatives which can provide diagnostic feedback
SPECIFIC
Constant Alternative type
1. Does the item include only one significant idea in each statement?
2. Is the statement so precise that it can be judged unequivocally true or false?
3. Is the statement short and in simple language?
4. Does the item use negative statements sparingly and avoid double negatives?
Multiple Choice
1. Is the stem concise and unambiguous? Is the negative (if unavoidable) emphasized?
2. Is the stem a complete question by itself? Does the item require the student to read the
options to discover what is being asked?
3. Is the content of the question clear?
4. Does the stem include anything that needs to be repeated in every option, within itself?
5. Are the options parallel in content?
6. Are the options parallel in structure?
7. Is the item devoid of any clues such as mix up of singular, plural, precision and length
of key option etc.?
8. Is the key option unarguably correct?
9. Are the distracters plausible?
10. Does the item exclude 'all these'?
11. Is the language used in the item appropriate to the vocabulary of students at this level?
12. Does the item avoid similarity of wording in both stem and the correct answer?
13. Does the item exclude responses that are "all inclusive"?
14. Does the item use an efficient format?
Matching type
1. Does the item include only homogeneous material in the premises?
2. Is the number of responses sufficiently large so that the last of the premises still has many options to choose from?
3. Does the item specify the basis of matching, type of matching, kind of entry etc?