UNIT 17 CURRICULUM EVALUATION

Structure

17.1 Introduction
17.2 Objectives
17.3 Curriculum Evaluation: Nature and Purpose
17.3.1 Evaluation Questions
17.3.2 Defining Curriculum Evaluation
17.4 Approaches to Curriculum Evaluation
17.4.1 Scientistic and Humanistic Approaches
17.4.2 Intrinsic and Pay-off Evaluation
17.4.3 Formative and Summative Evaluation
17.5 Curriculum Evaluation Models
17.5.1 Metfessel-Michael Model
17.5.2 Congruence-Contingency Model
17.5.3 Discrepancy Evaluation Model
17.5.4 The CIPP Model
17.5.5 Connoisseurship Model
17.6 Curriculum Evaluation Phases
17.7 Characteristics of Evaluation
17.8 Participants in Evaluation
17.9 Let Us Sum Up
17.10 Unit-end Activities
17.11 Suggested Readings
17.12 Answers to Check Your Progress

17.1 INTRODUCTION

So far we have discussed the concepts of curriculum, its bases, planning the curriculum
and its development. The process of curriculum development consists of several stages,
and evaluation is an integral part of the curriculum development process. Curriculum
evaluation assesses the effectiveness of a curriculum and enables us to compare what
we have achieved with what we had set out to achieve.
None of us can dispute the importance and necessity of curriculum evaluation. Yet
those responsible for evaluating curricula do not always consider it a useful process.
Many schools do not have the required resources to conduct evaluation, and the process
often causes confusion. Yet various aspects of the school process, which contribute to
student learning, can be evaluated. For example, the usefulness of programme objectives,
the degree to which students achieve those objectives, and the contents, activities, plans
and methods that are a part of the curriculum can be evaluated. Teachers' performance,
learner skills, behaviours, achievement, etc. can be subjected to evaluation. Evaluation
occurs at various levels, from the national level to the classroom, and involves a variety
of people.
In evaluating a curriculum, we need to ask whether it has:
an intrinsic value
an instrumental value
a comparative value
an idealization value
a decision value
Let us take up each of these value questions in the given order for discussion.
The question of intrinsic value
It addresses the appropriateness of a curriculum in a given context. It deals with the
curriculum as planned and also with the finished curriculum as it is delivered.
The question of instrumental value
It attempts to clarify:
• what the curriculum is good for, and
• who the intended audience/target group is.
The curriculum planned is linked with the goals and objectives stated for the programme.
It tries to find out whether what is planned in the curriculum will be attained, to what
extent, and by whom, i.e. the target group. Hence, the target group should be identified at
the beginning of the curriculum activity itself. Evaluation efforts should identify the
types of students who are likely to benefit the most from the curriculum being planned.
The question of comparative value
Such a question is often raised when new programmes are introduced. Usually new
programs are created when people feel that the existing programme is inadequate.
Often, when dealing with the question of comparative value, we get caught up in
making comparisons of two dissimilar programs, with different objectives/goals. We
cannot ask, for example, whether or not a programme that stresses skill training is
better than one that stresses the value-structure of the world. Being different, a comparison
of such programs will be of little help for purposes of evaluation. Hence, comparisons
are useful in the case of identical programs. Here comparison of programmes includes
ease of delivery, cost, student achievement, demand on resources, community-
responsiveness or otherwise, role in the school organization, etc.
The question of idealization value
This question requires continued action throughout the delivery of the new programme.
Educators must constantly ask themselves how they might fine tune the program's
content, materials, methods, and so on, so that students can derive optimal benefits
from experiencing it.
The question of decision value
The main focus of this question is on decision-making, i.e., whether to retain, modify,
or discard the new programme. It is an ongoing question, because at every stage of
curriculum development and delivery a decision has to be taken. (Ornstein and Hunkins,
1988.)

17.3.2 Defining Curriculum Evaluation


Taken together, the five questions mentioned above suggest that evaluation is
a process by which we can make decisions about a curriculum in terms of course
improvement, the individuals involved - teachers, students, etc. - and administrative
effectiveness.
Before we proceed to a detailed discussion on the technical details of curriculum
evaluation, we must have a brief idea of these two terms, i.e. 'Curriculum' and
'Evaluation'. 'Curriculum' is a system of learning experiences deliberately designed
and transacted for realising certain goals. As for evaluation, it is a systematic process
of determining and appraising the proficiency level of a system or a practice. Applied
to curriculum, evaluation focuses on discovering whether the curriculum as designed,
developed and implemented, is producing or can produce the desired results. Evaluation
serves to identify the strengths and weaknesses of the curriculum before implementation
and the effectiveness of its delivery after implementation.
Worthen and Sanders (1973) have defined evaluation as "the determination of the
worth of a thing. It includes obtaining information for use in judging the worth of a
programme, product, procedure, or objective, or the potential utility of alternative
approaches designed to attain specified objectives".
Bruce Tuckman (1979) has defined evaluation as "...the means for determining whether
the programme is meeting its goals: that is, whether the measures/outcomes for a
given set of instructional inputs match the intended or pre-specified outcomes".
All the above definitions point out that decision-making is central to evaluation.
Evaluation enables educators to identify alternative curricular actions and determine
various combinations of curricula to ensure maximum student learning in the light of
overall programme goals. (Ornstein and Hunkins, 1988.)

Check Your Progress


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of the unit.
1. Define Curriculum Evaluation.
...............................................................................................................
...............................................................................................................
...............................................................................................................
2. What are the five questions pertinent to curriculum evaluation?
...............................................................................................................
...............................................................................................................
...............................................................................................................

17.4 APPROACHES TO CURRICULUM EVALUATION
Evaluation may be considered as a broad and continuous effort to find out the effects
of implementing content and procedures to achieve pre-set goals. It is not content
specific but is a methodological process. Michael Scriven feels that evaluation essentially
consists of gathering and combining data in relation to a weighted set of goals or scales so
as to allow people to make judgements about worth. (Ornstein and Hunkins, 1988.)
How people process data is determined to a large extent by their philosophical and
psychological orientations. Humanists would argue that quantitative expressions of
learning outcomes are insufficient to determine the quality of learning. They feel that
the learning experience is important in itself and should have helped the students in
enhancing their self-concept.

A behaviourist would approach evaluation from a sequenced orientation, i.e. objectives
will be clearly stated and relevant activities would be performed to achieve the intended
outcomes. Whatever the orientation or posture adopted by the educator, evaluation
still involves two dimensions - management and decision-making. Evaluators have to obtain
data on which judgements will be based; communicate the effectiveness of the curriculum
to students and others; determine criteria to judge various aspects of the curriculum and
devise a management plan for all involved in the curriculum process.
17.4.1 Scientistic and Humanistic Approaches
Cronbach (1982) has identified two approaches to evaluation - the scientistic ideals
approach and the humanistic ideals approach. He has presented these two approaches
at the two ends of an evaluation continuum. The scientistic end advocates
experimentation and the humanistic end does not have faith in experimentation. The
scientistic ideals believer focusses on experiment:
"A true experiment ... concentrates on outcome or impact and embodies three
procedures: (1) Two or more conditions are in place, at least one of them being the
consequence of deliberative intervention. (2) Persons or institutions are assigned to
conditions in a way that creates equivalent groups. (3) All participants are assessed
on the same outcome measures" (Cronbach, 1982).
In this approach all efforts are focussed on the learners. Students' achievements in
different situations are compared by way of test scores. Quantitative measures are
adopted for data collection and statistical tools are employed for data analysis.
The humanistic ideals approach according to Cronbach is on the other end of the
evaluation continuum. He describes it as very different from the scientific ideals
approach:
"Writers at the humanistic extreme find experiments unacceptable. For them, naturalistic
case studies are the panacea. A humanist would study a program already in place, not
one imposed by the evaluator. If persons are assigned to a treatment, that is because
the policy under study calls for assignment; assignments are not made for the sake of
research. The programme is to be seen through the eyes of its developers and clients.
Naturalistic investigators would ask different questions of different programmes.
Benefits are to be described, not reduced to a quality. Observations are to be
opportunistic and responsive to the local scene and not pre-structured."
Analysis of data collected through humanistic approach differs significantly from that
collected through scientific approach. Data collected through the former are more
qualitative than quantitative. The techniques employed are basically observation,
interviews, personal meetings and discussions with participants.
However, curriculum evaluators tend to adopt a middle approach, i.e. somewhere
between the two ends of the continuum.

17.4.2 Intrinsic and Pay-off Evaluation


Evaluators may look at a curricular programme directly while others could study it
quantitatively after it is implemented. The first type is called intrinsic evaluation by
Michael Scriven (1978). The evaluators merely answer the question, "How good is
the curriculum?", instead of evaluating it on any criteria. Scriven cites the example of
studying an axe to explain intrinsic evaluation. An individual would study an axe by
examining the following aspects; design of the bit, the material used, the weight
distribution, shape and fit of the handle. People assume that an axe of such dimensions
would cut trees but they do not try it directly. Intrinsic evaluation of curricula implies
that evaluators study the content, its sequence, organization, accuracy, learning
experiences provided, etc. They believe that with accurate content and organization
student learning would be stimulated. Most of the time, evaluators tend to neglect the
concept of intrinsic evaluation. Instead of asking the question, "How good is the
curriculum?" they ask, "How well does the course or curriculum achieve its goals?"
Educators must however establish the worth of the curriculum, its goals, objectives
and related content. According to Scriven, pay-off evaluation occurs when the effects
of the delivered curriculum are examined and its worth has been established. The
effects of the curriculum on learners can be determined since this evaluation involves
judgements based on pretest and post-test scores or experimental group tests and control
group tests and other parameters. Apart from students, its effects can be examined
on teachers, parents and administrators. This allows evaluators to measure the
attainment of objectives by learners, which intrinsic evaluators cannot gauge. On the
other hand, supporters of intrinsic evaluation counter that outcomes of curriculum do
not actually show up because the present testing instruments and scoring procedures
are laced with short-comings. They also feel that to examine the full worth of a
curriculum, the materials should be looked at directly rather than at students' test
scores. (Ornstein and Hunkins, 1988.)

17.4.3 Formative and Summative Evaluation


Choice of evaluation techniques also depends on the kind of decisions that evaluators
have to make. In this context, two terms are used, formative and summative evaluation.
Formative evaluation aims to improve an existing programme based on the feedback
obtained from the evaluation. Hence, programme developers must be frequently
provided with detailed and specific information to guide them in the developmental
phase. On this basis evaluators can revise the programme while it is being developed,
before it can be implemented on a large scale. Formative evaluation can occur at
several stages during the curriculum development process. At any stage the validity
of the content can be checked, i.e. whether students are achieving the stated goal or
objective by going through the content; if not, then that content could be modified.
Cronbach (1990) has provided guidelines for conducting formative evaluation in an
article where he has spoken of course improvement through evaluation. The important
steps highlighted are:
1. "Seek data regarding changes produced in pupils by the course.
2. Look for multi-dimensional outcomes and map out the effects of the course
along these dimensions separately.
3. Identify aspects of the course in which revisions are desirable.
4. Collect evidence midway in curriculum development, while the course is still
fluid.
5. Try to find out how the course produces its effect and what factors influence its
effectiveness. You may find that the teacher's attitude toward the learning
opportunity is more important than the opportunity itself.
6. During trial stages, use the teacher's informal reports of observed pupil behaviour
in aspects of the course.
7. Make more systematic observations, but only after the more obvious flaws in the
early stages have been dealt with.
8. Make a process study of events taking place in the classroom, and use proficiency
and attitude measures to reveal changes in pupils.
9. Observe several results of the new programme ranging far beyond the content
of the curriculum itself - attitudes, general understanding, aptitude for further
learning and so forth."
Evaluators differ in their ways of conducting formative evaluation. If they are evaluating
only one unit plan then it would involve only those teaching the unit. However, if they
are devising a new programme for the entire district then it would involve a formal and
systematic procedure.
Since curriculum development takes place over a span of time it provides opportunity
for guiding and shaping the curriculum. According to Gronlund (1985) it gives the
teachers an opportunity to record both intended and unintended effects. The curriculum
process is kept "open" since feedback is used and adjustments are made.
Summative evaluation assesses the effect of a complete programme. It is carried out
at the end of an educational programme. It gives the picture of the curriculum in
totality once it has been implemented on the learners. The effectiveness of the entire
curriculum can be assessed through summative evaluation, or also of a particular
programme or course within the curriculum. This type of evaluation is based on the
evidence about "summed" effects of various components or units in the curriculum,
and hence it derives its name from it. The people involved in the curriculum process
can conclude how successfully the curriculum has worked.
Since summative evaluation is carried out at the end of the curriculum activity, it should
not be construed as a one-time affair only. It can occur at the end of some curricular
unit plans. Summative evaluation could also be planned at certain points during the
curriculum development process, for example, at the end of the first try-out stage
before the final implementation. This would help evaluators to check a curricular
programme as it evolves into the final product. Whereas formative evaluation uses
informal methods and processes, summative evaluation uses formal tools for gathering
data. Tests are carefully designed for attainment of objectives. Teachers' reactions
are assessed formally through carefully prepared surveys. Students are assessed
through tests at the end of the course or at the year end.
One of the main purposes of summative evaluation is to select, from several competing
curricular programmes, the one which should be accepted, and those which should be
discontinued. An experimental design would suit the purpose best. James Popham has
illustrated such designs. He talks about the Pretest/Posttest control group design.
Students are pretested on specified dimensions of the programs. After instruction,
students in the different programs are tested for the attainment of a common set of
objectives of the programs. Evaluators should not be biased towards any one set of
objectives, but objectives set by other designers should also be tested. Students are
randomly assigned to the programs so that each has an equal chance of being assigned
to any programme. Differences in learner achievement would be due to differences in
the programmes. Here the experimental unit of analysis is not the pupil, but schools or
classrooms. If pupils of the same class are subjected to different programs, then the
pupil becomes the unit of analysis. (Popham, 1988.)

Check Your Progress


Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of the unit.
3. Differentiate the following approaches for curriculum evaluation.
i) Scientific and Humanistic.
......................................................................................................
......................................................................................................
ii) Formative and Summative.
......................................................................................................
.......................................................................................................
......................................................................................................

Having looked into the approaches of curriculum evaluation we shall now deal with a
few models of curriculum evaluation.

17.5 CURRICULUM EVALUATION MODELS


The first major evaluation effort directed at curriculum was conducted under the
direction of Ralph Tyler between 1933 and 1941. The study was concerned with the
total process of curriculum development, and evaluation was an integral part of that
concern. Tyler, through his efforts as research director of the study, greatly influenced
- and still influences - the planning of evaluation studies. The following stages that
Tyler (1942) recommended for curriculum evaluation still hold good:
i) Establishing broad goals/objectives
ii) Classifying objectives
iii) Defining objectives in behavioural terms
iv) Finding situations in which achievement of objectives can be shown
v) Developing/selecting measurement techniques and collecting student performance
data
vi) Comparing data with behaviourally stated objectives.
He maintains that evaluation is a recurring process and that evaluation feedback should
be used to reformulate or redefine objectives. In other words, information gathered
could be ploughed back into the system to modify the objectives and the programme which
is being evaluated.
is being evaluated. This recycling process keeps the evaluation system dynamic.
We shall now touch upon a few evaluation models which have immediate relevance
to our context.

17.5.1 Metfessel-Michael Model


Metfessel and Michael (1967) present a model with eight major steps in the evaluation
process. It is a variation of the Tylerian model.
The model can be presented diagrammatically as follows:

A variation of the Tyler Model, this model clearly suggests, among other things, that
evaluators should involve all those who will be 'affected' by the curriculum, i.e.,
teachers, professional organisations, senior citizens, students, etc., besides experts, and
conduct periodic observations throughout the implementation and maintenance of the
programme using tests, cases, etc.

17.5.2 Congruence-Contingency Model


Very often curriculum evaluation depends on casual observation, implicit goals, intuitive
norms, subjective judgements, etc. However, Stake (1967) stresses establishing formal
evaluation procedures. According to him, formal procedures will help increase the
objectivity in evaluation. As they aim at furnishing data, we can make descriptions and
judgements of the curriculum being evaluated. Stake argues that for evaluation purposes,
we should not rely only on the statements of objectives/aims. We should allow all
those 'affected' by the curriculum to extensively participate in judging the curriculum.
He further maintains that the data can be collected under the following three bodies of
information:
1. Antecedents: This is any condition that exists prior to teaching and learning that
may influence the outcomes. For example, prior knowledge, aptitudes,
psychological profiles of students, etc., years of experience of teachers, teacher
behaviour, etc.
2. Transactions: Learning transactions that occur between and among teachers
and students, students and students, and among students and resource people.
3. Outcomes: These are the consequences of education - immediate and long-
range, cognitive and conative, personal and community-wide. For example,
students' performance, achievements, etc. Stake, however, lays stress even on
such outcomes as the impact of a new programme on teachers' perception of
their competence.
Stake has presented the three categories of data in a matrix, which is represented
diagrammatically in the figure below:

Fig. 17.2: Stake's Congruence-Contingency Model.

(Source: Ornstein and Hunkins.)

The term 'contingencies' here refers to the relationships among the variables in the three
categories: antecedents, transactions and outcomes. Once the evaluator collects views
on a curriculum from various sources like students, teachers, support staff, etc., he
puts them on a matrix to identify the congruencies and contingencies among them.
The model thus provides an organizational framework that points to
the data to be considered and compares what is planned with what has occurred.
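As an informal illustration only (not part of Stake's own presentation), the matrix idea can be sketched as a simple data structure that pairs intended and observed entries for each category and flags where they are not congruent; all sample entries below are hypothetical.

```python
# A minimal, hypothetical sketch of Stake's intended-versus-observed matrix.
# Category names follow the unit; the sample entries and the simple string
# comparison used to flag congruence are illustrative assumptions.
matrix = {
    "antecedents": {"intended": "pupils with basic numeracy",
                    "observed": "pupils with mixed numeracy levels"},
    "transactions": {"intended": "small-group problem solving",
                     "observed": "small-group problem solving"},
    "outcomes": {"intended": "80% mastery of unit objectives",
                 "observed": "62% mastery of unit objectives"},
}

for category, cells in matrix.items():
    congruent = cells["intended"] == cells["observed"]
    status = "congruent" if congruent else "discrepant"
    print(f"{category:12s}: {status} (intended: {cells['intended']}; "
          f"observed: {cells['observed']})")
```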

17.5.3 Discrepancy Evaluation Model


This model developed by Malcolm Provus (1971) has the following four components:
i) determining curriculum standards
ii) determining curriculum performance
iii) comparing curriculum performance with standards
iv) determining whether a discrepancy exists between the standards set and the curriculum.
If there is any discrepancy, it will be communicated to the decision-makers, who, in
turn, have to incorporate necessary modifications at every stage. This they can do by
doing any one of or a combination of the following:
going to the subsequent stage
recycling to a previous stage
starting the curriculum over
modifying the performance/standards
terminating the curriculum
A diagrammatic representation is given below:

Fig. 17.3: Provus's Discrepancy Evaluation Model (P = Performance, S = Standard,
C = Comparison, D = Discrepancy, RP = Recycle).

(Source: Ornstein and Hunkins, 1988.)
Provus's model is called the discrepancy model because it compares performance with
standards to determine whether a discrepancy exists between the two (Ronald Doll,
1996).
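The comparison step at the heart of the model can be pictured with a small numerical sketch; it is an illustration rather than Provus's own procedure, and the components, standards and performance figures below are hypothetical.

```python
# Hypothetical sketch of the discrepancy comparison: standards versus
# observed performance for a few curriculum components.
standards   = {"reading mastery": 0.80, "attendance": 0.90, "project completion": 0.75}
performance = {"reading mastery": 0.68, "attendance": 0.92, "project completion": 0.74}

for component, standard in standards.items():
    discrepancy = standard - performance[component]
    if discrepancy > 0:
        # report the gap to decision-makers, who choose whether to recycle,
        # modify the standard, or continue to the next stage
        print(f"{component}: discrepancy of {discrepancy:.2f} below the standard")
    else:
        print(f"{component}: standard met")
```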

17.5.4 The CIPP Model


'CIPP' here refers respectively to the first letters of
Context
Input
Process
Product
Stufflebeam (1971) considers evaluation a continuous process and suggests that four
types of decisions are required in evaluation efforts. The four types are:
Planning decisions
Structuring decisions
Implementation decisions
Recycling decisions
Corresponding to these decision types there are four types of evaluations: context,
input, process and product. The following figure shows these types of evaluation in
relation to the four decision types:

Fig. 17.4: Types of Decisions and Evaluation.
(Source: Ornstein and Hunkins, 1988.)

Let us now take up for discussion each of the four evaluation types.

Context evaluation
It involves studying the environment in which we run the curriculum. Stufflebeam
maintained that context evaluation is the most basic type that provides a rationale for
determining objectives. It helps us
define the relevant environment
portray the desired conditions pertaining to that environment
focus on unmet needs and missed opportunities
diagnose the reasons for unmet needs.
This should suggest that context evaluation is not a one-time activity. It continues to
furnish baseline information regarding the operations and accomplishments of the total
system.

Input evaluation
The purpose of this stage is to provide information for determining how to utilize
resources to meet curriculum goals. At this stage we evaluate alternative designs in
terms of how they will contribute to the attainment of objectives stated and in terms of
their demands upon resources, time and budget. We should consider them in the light
of procedural feasibility. In contrast to context evaluation, input evaluation is ad
hoc and micro-analytic. It evaluates specific aspects or components of the curriculum.

Process evaluation
This stage addresses curriculum implementation decisions that control and manage
the plan or curriculum. Through process evaluation, we can determine the congruency
between the planned and actual activities. Stufflebeam presents the following three
main strategies for process evaluation.
i) To detect or predict defects in the procedural design or its implementation during
the diffusion stages. In dealing with plan or curriculum defects, we should identify
and monitor continually the potential sources for the failure of the curriculum.
The sources may be logistic, financial, etc.
ii) To provide information for curriculum decisions. Here we should make decisions
regarding test development prior to the actual implementation of the curriculum.
Some decisions may require that certain in-service activities be planned and
carried out before the actual implementation of the curriculum.
iii) To maintain a record of procedures as they occur. It addresses the main features
of the project design. For example, the particular content selected, the instructional
strategies planned or the time allotted in the plan for such activities.
As process evaluation occurs during the production stage of the curriculum, it helps us
anticipate and overcome procedural difficulties and make pre-programmed decisions.

Product evaluation
It helps us determine whether the final curriculum product in use accomplishes the
intended goals. Depending on the data collected, we can decide whether to continue,
terminate or modify a curriculum.

17.5.5 Connoisseurship Model


The model recommends a process called educational criticism and connoisseurship.
Thus, it is markedly different from the other models, which draw heavily on the
quantitative, technical posture of evaluation. The connoisseurship model, on the contrary,
tries to furnish a qualitative description of educational life as a consequence of new
programmes.
We should note here that Eisner (1985), the propounder of this model, draws heavily
from the arts to strengthen his stance. He states, for example, that if an individual is to
be an illuminating critic of painting, film etc., he/she must be a connoisseur. In other
words, he/she must possess a great deal of knowledge about and experience with the
type of phenomenon he/she is to criticize. Further, the critic needs to have an awareness
and appreciation of the subtle qualities of the situation and write about the nuances of
the situation in ways that help others to become more aware of the phenomenon
under consideration. In essence, Eisner points out that educational connoisseurship is
the art of appreciating what is educationally significant. But such appreciation is made
public through criticism - the description, interpretation and assessment of the situation.
In discussing his approach to evaluation, Eisner relies on the following two elements
instead of scientific validity:
i) Referential adequacy: it requires the critic to check that the critical observations and
interpretations are empirically grounded. It allows the reader to experience the
evaluated phenomenon in a new and better way.
ii) Structural corroboration: it is a continuous inquiry about whether the various
parts of the criticism fit together as a consistent whole.
Besides, he stresses the importance of analyzing the works of students during the
evaluation process by noting down what is said and done, rather than what is not done.
Eisner, thus, advocates describing the 'tone' of the curriculum in action and the
educational scene.

Check Your Progress


Notes: a) Space is given below for your answer.
b) Check your answer with the one given at the end of the unit.
4. List two characteristics that distinguish the connoisseurship model from
the other curriculum evaluation models.
...............................................................................................................

...............................................................................................................
...............................................................................................................
...............................................................................................................
...............................................................................................................

All the models except Eisner's connoisseurship model are constructed in the same
manner, i.e. they consist of logical steps arranged in a sequence. The steps in some of
the models are adopted from system analysis, which follows an eight-step cycle:
identifying the need, stating objectives, pointing out major constraints,
developing alternative systems, selecting the best alternative, putting one into practice,
evaluating the system and getting feedback for modifications. Eisner's model is based
on describing phenomena in different areas of experience. The participants in the
curricular programmes are the judges, since they are involved in the actions. The
participants are teachers, students and administrators. Qualitative techniques are used
to gather data from them, e.g. interviews, autobiographies, etc. In the final report data
are organized, summarized and interpreted. (Doll, 1996.)

17.6 CURRICULUM EVALUATION PHASES


The previous models reveal that a variety of practices are involved in curriculum
evaluation. Although there are various opinions about the precise steps, it is useful to
know exactly how to proceed through the evaluation process. Whether the approach
is scientistic or humanistic, both have to focus on the curricular phenomenon and
devise means, subjective or objective, to collect information. There should be a plan of
action. A common process that is adopted for evaluation is:

Focus of Evaluation
Evaluators should decide what they will evaluate and how, i.e. the focus and design.
They have to determine the precise aspect of the curricular programme to be evaluated,
i.e. whether it will be the entire school system or one school, the entire subject area
curriculum or one unit in the subject, etc. For this, evaluators will have to define the
objectives, identify the constraints and policies, the level of decision-making, and a scheduled
time frame for completion of operations. Alternative action paths are determined and
criteria identified for assessing results of curricular components.

Collection of Information
Evaluators identify the essential sources from which they will get information and
methods they'll employ to get them. In terms of the time schedule they also work out
the stages of collecting information.

Organizing the Information

The information is organized in a manner that is easily understood and used by the
target audience. The information is organised, stored and retrieved in a specific manner.

Analysis of Information
Suitable analysis techniques are selected and information is analysed. The choice of
techniques will be based on the focus of evaluation.

Reporting Information
Depending on the audience the evaluators will decide the nature of reporting. Evaluators
could use informal reporting techniques such as giving opinions, making judgements.
They could also subject the information to statistical treatment and analysis.

Information Recycled
The process of curriculum evaluation is a continuous enterprise. The information is
continuously recycled and re-evaluated to keep it updated. This will ensure regular
feedback for curriculum improvement. The pressures affecting schools and curricula
are ever changing. Hence the curricula should be flexible for modifications and
adjustments.
The task of the evaluator is not merely to report the results. Along with this, they should
communicate the interpretations, analysis and recommendations as they work through
the various stages. Sometimes evaluators themselves are the audience and they have
to decide how to use that information and results. If they are involved with curriculum
development they could give their recommendations to the curriculum decision makers
and ensure that these are acted upon.
We must also bear in mind the management aspect of the evaluation process. At the
very outset the management aspects should be worked out, i.e. outline the various
evaluation stages with their time schedules, assign tasks to people (allocation of work)
and determine the financial requirements per task, i.e. prepare budgets.
(Ornstein and Hunkins, 1988.)
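Purely as an illustration of the management plan mentioned above (it is not prescribed by Ornstein and Hunkins), the stages, schedules, task allocations and budgets could be recorded in a simple structure such as the following; every stage name, duration and figure is a hypothetical example.

```python
# Hypothetical sketch of an evaluation management plan: stages of the common
# process described above, each with a schedule, responsible people and a budget.
from dataclasses import dataclass

@dataclass
class EvaluationStage:
    name: str
    months: int          # scheduled duration
    assigned_to: list    # people responsible for the stage
    budget: float        # estimated cost for the stage

plan = [
    EvaluationStage("Focus of evaluation", 1, ["evaluator"], 500.0),
    EvaluationStage("Collection of information", 3, ["evaluator", "teachers"], 2000.0),
    EvaluationStage("Organizing and analysing information", 2, ["evaluator"], 1500.0),
    EvaluationStage("Reporting and recycling", 1, ["evaluator", "advisory committee"], 800.0),
]

total_budget = sum(stage.budget for stage in plan)
print(f"planned duration: {sum(s.months for s in plan)} months, budget: {total_budget}")
```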

17.7 CHARACTERISTICS OF EVALUATION


Some important characteristics of evaluation should be borne in mind before evaluators
actually begin the process of curriculum evaluation.

Values and Valuing


The initial step in the process of evaluation is the act of valuing or expression of
values, which the evaluator holds. For example, it has been agreed in the past that the
learning outcome at the basic education level, which is to be evaluated, is the mastery
of the three Rs. So, whenever a group initiates evaluation, it has definite beliefs about
what is worth evaluating. At the end of the process, value judgements are made about
the impact of the process.

Adopting Goals
In order to give the evaluation programme a definite direction, evaluators must display
a definite orientation to goals. An educational programme could have goals ranging
from getting information, comprehension and skill development to critical thinking, analysis,
etc. As soon as goals are set, methods of evaluating achievement of these goals are
also decided. This helps to (i) ascertain goal clarity and attainment and (ii)
decide methods of evaluation likely to register their attainment.
Establishing Norms
Norms must be established for evaluation, in order to judge the quality and quantity of
educational achievement. According to Doll (1996), "Norms come in different forms
to answer different questions: (1) Is the desired behaviour present? (2) Is the behaviour
what it should be, considering factors such as ability, environmental circumstances,
and resources? (3) Is the behaviour socially desirable? (4) Does the past record suggest
that the behaviour is suitable for future use? (5) Does the behaviour result in the
attainment of significant and worthy ends? (6) How much behavioural change is to be
anticipated?" Norms are used to discriminate between individual students. They could
also be used to discriminate between curricula. Some norms do not use comparisons,
but simply permit judgement of the quantity and quality of achievement in an educational
activity.
Certain non-standard norms, like criterion-referenced norms, are also used. They could be
used as indicators of status and change in curriculum projects. They can reveal the
overall success of the curriculum; therefore norms must be incorporated.
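A small numerical sketch may help to distinguish the two kinds of judgement that norms permit; it is an illustration added here, with hypothetical scores and a hypothetical criterion of 75 per cent.

```python
# Hypothetical sketch contrasting a comparative (norm-referenced) judgement
# against other pupils with a criterion-referenced judgement against a fixed
# standard. Names, scores and the 75% criterion are illustrative values only.
scores = {"Asha": 72, "Bhanu": 58, "Chitra": 91, "Dev": 64}
criterion = 75  # percentage needed to count the objective as attained

class_mean = sum(scores.values()) / len(scores)
for pupil, score in scores.items():
    norm_referenced = "above" if score > class_mean else "below"
    criterion_referenced = "attained" if score >= criterion else "not attained"
    print(f"{pupil}: {norm_referenced} the class mean of {class_mean:.1f}; "
          f"criterion of {criterion}% {criterion_referenced}")
```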
Comprehensiveness
Though not an easy task, evaluators should make evaluation as broad as the goals
to which it pertains. It is difficult to evaluate changes in the affective domain - attitudes
and appreciations. Evaluation could be made comprehensive by using varied media.
Continuity
The process of evaluation is without exception placed at the end. This should not
mean that it features last in the educational process. Evaluation should be a continual
and ongoing process and should be carried out at every stage of the enterprise with
skill and imagination.
Diagnostic Worth and Validity
In order to be appropriate, curriculum evaluation should reflect two characteristics
- diagnostic worth and validity. Instruments of evaluation should be able to diagnose
specific aspects of the educational process and should be valid, i.e. measure what they
seek to measure. In the context of curriculum evaluation, validity implies the ability to
measure the effects of the curriculum on repeated occasions.
Integration of Findings
An important aspect of curriculum evaluation is the integration of the results into a
meaningful, comprehensive whole. Information left in a diverse and unintegrated state
serves no useful purpose. In order that the findings of evaluation are meaningful,
information should be organised and interpreted i.e. results should be integrated.
Progressing towards Goal Attainment
All evaluation seeks to ascertain progress towards the goals. At the same time one
must know what, when and how progress is occurring. Doll (1996) has given the
following criteria of progress, which need to be established:
1. "Are we really moving towards our goals? (Theme: perceptibility of movement)
2. How much movement is present? (Theme: time and space)
3. How fast is movement occurring? (Theme: rate)
4. What precisely can be said about directions of movement? (Theme: directed and
aberrant motion)
5. How does the general movement we have discovered relate to other movements
toward change or improvement? (Theme: relevance within the whole complex
of improvement)." (Doll, 1996.)
17.8 PARTICIPANTS IN EVALUATION
Hilda Taba maintained that evaluation is a cooperative activity: "This cooperation is as
necessary to the process of evaluation as it is to the various activities of the total
curriculum" (Ornstein and Hunkins, 1988). (This cooperative endeavour extends to all
phases of curriculum evaluation, like devising the evaluation plan, selecting the evaluation
design and instruments, and throughout the evaluation stages right from framing objectives
to integration of results and report writing.) Evaluation decisions are not made by any one
person - student, teacher or administrator - nor about a single aspect. Decisions
are made about several aspects and require coordination among all participants -
teachers and administrators. Such a collective effort brings forth a picture of the
curriculum in totality. If teachers collaborate they can get to know the impact of the
curriculum on various types of students. If they work in isolation then decisions are
restricted to only their group of students.
The main participants in curriculum evaluation are:

1. Evaluator
Several people play a role in the evaluation process but it is advisable to have one
person in charge. This person, the evaluator, shall coordinate with the school which
administers the curriculum.
If the evaluator is a school member then it has several advantages -
(i) The evaluator is well versed with the system and its goals
(ii) Results are easily accepted since he is part of the school
(iii) It is economical since the evaluator is already on the school payroll.
However the disadvantages could be -
(i) May hesitate to issue or deliver a critical report of the system of which he is a
part
(ii) May have other pre-occupations and hence may not devote himself or herself
totally to this enterprise.
The evaluator is basically an observer who gathers data and provides it to the decision
makers.
"In theory the evaluator serves as the eyes and the ears of the decision maker. In this
role he or she furnishes data gathered from observations about how the curriculum is
functioning in the school. It is up to the curriculum coordinators, curriculum advisory
committees and the teachers to take the data gathered, to judge their value, and then
to act upon them. The evaluator is essentially a support person to the curriculum
development and implementation efforts" (Oliva, 1988).

2. Teachers
Teachers should participate in curriculum evaluation. Usually they work alone or are
generally involved in evaluating the instructional skills in delivering the curriculum.
They should take active part by being involved in curriculum advisory committees
which undertake programme evaluation. Hence teachers can serve as valuable agents
for curriculum evaluation.

3. Committees
Curriculum evaluation/review could be done by several committees, since it is a
cooperative activity. Ornstein and Hunkins (1988) suggest that most schools "should
have a curriculum advisory committee and a special committee responsible for
evaluation policy and procedure. Its membership can be similar to that of curriculum
advisory committees - that is, participants could be teachers and administrators and
representatives of the lay community. Depending on the school district and the
curriculum level, students might also serve on this committee".

4. Outside Experts
Schools can employ outside consultants to coordinate the evaluation enterprise.
Sometimes schools do not have staff trained specifically for evaluation. In such a
situation an outside person is called in for evaluation. Some feel that the evaluator should
always be an outside person, for he/she will have no bias and will be more truthful and
objective in his/her reporting.

5. Policy Makers
Policy makers holding responsible positions in national level bodies like NCERT, CBSE and
State Boards of Education could also contribute to the evaluation process. Because of
the positions they occupy, they are better informed about the current and future changes
in government policies, which directly or indirectly influence the school curriculum. Recently
the change of government has been responsible for changing the course content in
some subject area textbooks, e.g. History, Political Science and Science.
Check Your Progress
Notes: a) Write your answers in the space given below.
b) Check your answers with the one given at the end of the unit.
5. Why must the curriculum evaluation process incorporate norms?
...............................................................................................................
...............................................................................................................
...............................................................................................................
...............................................................................................................
...............................................................................................................
6. What is the advantage of having a school member as an evaluator of the curriculum?
.................................................................................................................
...............................................................................................................
...............................................................................................................
...............................................................................................................
...............................................................................................................

17.9 LET US SUM UP


The unit began with a discussion on the nature and purpose of evaluation and addressed
the key questions that evaluation is concerned with. Curriculum evaluation is a major
aspect of curriculum development and requires similar expertise and resources,
depending on the approaches adopted. We arrived at a definition of curriculum
evaluation and discussed several models of evaluation. Curriculum is dynamic and
continuous. Considering the complexity of the process, we discussed the essential
characteristics of the process. These aspects emphasised that evaluators should bear
them in mind before launching themselves into the evaluation activity. The last section
of the unit dealt with the various roles played by different participants in the curriculum
evaluation activity.
17.10 UNIT-END ACTIVITIES
1. Identify some programme, course of study or unit commonly used in schools.
After stating examples of the kinds of objectives that might be used in the
programme, ask the group to define as many variables as possible that might be
considered in an evaluation. For each variable, describe the type of data to be collected
and discuss how it could be acquired.
2. Collect several examples of judgements about schools from local or national
media. Analyze and discuss the examples, considering such concerns as data
used to form the judgement, accuracy of opinions, opposing or conflicting data,
and relevance of nationally based judgements to local schools.
17.11 SUGGESTED READINGS
Cronbach, Lee J. (1982): Designing Evaluations of Educational and Social
Programs, San Francisco: Jossey-Bass.
Doll, Ronald C. (1996): Curriculum Improvement, Decision Making and Process,
London: Allyn & Bacon.
Eisner, Elliot W. (1985): The Educational Imagination on the Design and Evaluation
of School Programs, 2nd ed., New York: Mac Millan.
Gronlund, Norman E. (1985): Measurement and Evaluation in Teaching, 5th ed.,
New York: Mac Millan.
Beane, James A. and Toepfer, Conrad F. Jr. (1986): Curriculum Planning and
Development, London: Allyn and Bacon Inc.
McNeil, John D. (1990): Curriculum: A Comprehensive Introduction, 4th ed., London:
Scott Foresman/Little, Brown Higher Education.
Oliva, Peter F. (1988): Developing the Curriculum, 2nd ed., London: Scott, Foresman
and Company.
Oliver, Albert I. (1977): Curriculum Improvement: A Guide to Problems, Principles
and Process, 2nd ed., New York: Harper and Row.

Ornstein, A.C. and Hunkins, F.P. (1988): Curriculum, Foundations, Principles and
Issues, New Jersey: Prentice Hall.
Popham, W.J. (1988): Educational Evaluation, NJ: Prentice Hall.
Provus, Malcolm (1971): Discrepancy Evaluation for Educational Program
Improvement and Assessment, California: Mc Cutchan.
Scriven, Michael (1978): The Methodology of Evaluation, in J.R. Gress and D.E.
Purpel (eds.), Curriculum: An Introduction to the Field, California: Mc Cutchan.
Stake, Robert E. (1967): The Countenance of Educational Evaluation, Teachers
College Record, pp. 523-540.
Stufflebeam, Daniel L. (1971): Educational Evaluation and Decision Making,
Itasca: Peacock, p. XXV.
Talmage, Harriet (1985): Evaluating the Curriculum: What, Why and How, National
Association for Secondary School Principals, May, pp. 1-8.
Tuckman, Bruce W. (1979): Evaluating Instructional Programs, Boston: Allyn and
Bacon.
Tyler, Ralph W. (1942): General Statement on Evaluation. Journal of Educational
Research, pp. 492-501.
Worthen, Blaine R. and Sanders, James R. (1973): Educational Evaluation: Theory
and Practice, Ohio: Jones.

17.12 ANSWERS TO CHECK YOUR PROGRESS


1. Curriculum evaluation is a process, which determines whether the curriculum as
designed, developed and implemented is producing or can produce the desired
results. It strives to identify the strengths and weaknesses of the curriculum
before its implementation and effectiveness after its implementation.
2. The five types of value questions pertinent to evaluating curricula are the questions
of:
Intrinsic value
Instrumental value
Comparative value
Idealization value
Decision value

3. i) The scientific approach focusses on the outcome or impact of curricular experiences
on learners. This assessment is done on the basis of test scores. It is an
experimental approach. Advocates of the humanistic approach to evaluation
believe in naturalistic case studies, which are analysed qualitatively. Data
are obtained through interviews, discussions, etc.
ii) Formative evaluation is an ongoing evaluation which aims at improving an
existing programme based on the feedback obtained from the evaluation.
On the other hand, summative evaluation is carried out to assess the effect
of a programme at the end.
4. Two characteristics distinguish the connoisseurship model from others:
i) It tries to provide a qualitative description of educational life as a consequence
of a new curriculum.
ii) It relies more on aesthetic criticism of the adequacy of a curriculum than on its
scientific validity.
5. Norms must be incorporated to -
a) judge the quantity and quality of educational achievement
b) indicate the degree of change and improvement in the curriculum.
6. Advantages of having insiders as evaluators are:
a) The person is well versed with the system and its goals
b) It is economical
c) The evaluator's results are easily accepted by all.
