
KENYATTA UNIVERSITY

INSTITUTE OF OPEN DISTANCE & e-LEARNING

IN COLLABORATION WITH

SCHOOL OF APPLIED HUMAN SCIENCES


DEPARTMENT: COMMUNITY RESOURCE MANAGEMENT & EXTENSION

HCU 802: PROGRAM DESIGN MONITORING AND EVALUATION

WRITTEN BY: Dr. Joan Kabaria-Muriithi & Dr. Lucy Kathuri-Ogola

Copyright © Kenyatta University, 2015


All Rights Reserved
Published By:
KENYATTA UNIVERSITY PRESS

INTRODUCTION

Development organizations use programs to meet community needs and create social change.
Without well-designed and well-managed programs, most organizations cannot achieve their
goals, fulfill their missions or see their visions become a reality. This course provides students
with an understanding of programs and their cycles, with particular emphasis on design,
monitoring and evaluation. It is extremely important that students in the field of community
development acquire these capacities so that they can start new programs and improve existing
ones while using scarce human, financial and material resources efficiently to bring about social
change. Furthermore, community development practitioners are finding it increasingly difficult
to secure support for their initiatives; successful programs demonstrate relevance, innovation
and effectiveness, convincing donors, governments, the private sector and communities to invest
in and sustain such ventures.

OBJECTIVE

The overall goal of this course is to equip the student with competencies to assist local
communities to initiate, implement, monitor and evaluate programmes.

By the end of the unit, the students should be better able to:

1. Understand the basic concepts of program design, monitoring and evaluation.


2. Discuss the various components of program design in the community context.
3. Demonstrate an understanding of the program cycle and implementation
4. Explain tools and approaches used in program design, monitoring and evaluation.
5. Design a community based program.
6. Monitor and evaluate a specific community based program

INTRODUCTION
OBJECTIVE

LECTURE 1
INTRODUCTION TO PROGRAM DESIGN MONITORING AND EVALUATION
1.0 Introduction
1.1 Objectives
1.2 Lecture Outline
1.3 Definition of terms
1.4 Program and project differences
1.5 Program cycle
1.6 Summary
1.7 Activity
1.8 Further Reading
1.9 Self Test Questions
1.10 Glossary
References

LECTURE 2
PROGRAM DESIGN
2.0 Introduction
2.1 Objectives
2.2 Lecture Outline
2.3 Program design process
2.4 Components of a program design
2.5 Summary
2.6 Activity
2.7 Further Reading
2.8 Self Test Questions
2.9 Glossary
2.10 Answers to the Self-Test Questions
References

LECTURE 3
PROGRAM MONITORING
3.0 Introduction
3.1 Objectives
3.2 Lecture Outline
3.3 The concept of monitoring
3.4 Monitoring process
3.5 Benefits of monitoring
3.7 Summary
3.8 Activity
3.9 Further Reading
3.10 Self Test Questions
3.11 Glossary
3.12 Answers to the Self-Test Questions
References

LECTURE 4
PROGRAM EVALUATION
4.0 Introduction
4.1 Objectives
4.2 Lecture Outline
4.3 The concept of evaluation
4.4 Evaluation process
4.5 Benefits of evaluating programs
4.6 Challenges of evaluating programs
4.7 Summary
4.8 Activity
4.9 Further Reading
4.10 Self Test Questions
4.11 Glossary
4.12 Answers to the Self-Test Questions
References

LECTURE 5
RESULT BASED MONITORING AND EVALUATION
5.0 Introduction
5.1 Objectives
5.2 Lecture Outline
5.3 Concept of Result based Monitoring and Evaluation
5.4 Application of Result based Monitoring and Evaluation
5.5 Building a Result-Based M&E System
5.6 Summary
5.7 Activity
5.8 Further Reading
5.9 Self Test Questions
5.10 Glossary
5.11 Answers to the Self-Test Questions
References

LECTURE 6
EVALUATION MODELS AND TECHNIQUES
6.0 Introduction
6.1 Objectives
6.2 Lecture Outline
6.3 Evaluation Models and approaches
6.4 Evaluation techniques/designs
6.5 Summary
6.6 Activity
6.7 Further Reading
6.8 Self Test Questions
6.9 Answers to the Self-Test Questions

LECTURE 7
APPLICATION OF MONITORING AND EVALUATION PRINCIPLES
7.0 Introduction
7.1 Objectives
7.2 Lecture Outline
7.3 Monitoring and evaluation guiding principles
7.4 Application of Principles to M&E processes
7.5 Summary
7.6 Activity
7.7 Further Reading
7.8 Self Test Questions
7.9 Answers to the Self-Test Questions
References

MODULE REFERENCES

LECTURE 1
INTRODUCTION TO PROGRAM DESIGN MONITORING AND EVALUATION

1.0 Introduction
Welcome to the first lecture of this course. In it, we shall define and explain key terms related to
program design, monitoring and evaluation. In addition, we shall find out the difference between
a project and a program. Lastly, we shall explore the cycle of a program.

1.1 Objectives

By the end of this topic you should be able to:



 Define key terminologies related to program design, monitoring and
evaluation
 Differentiate between a project and a program
 Analyze the program cycle

1.2 Lecture Outline

 Key definitions related to program design, M&E


 Differences between project and program
 Program cycle

1.3. Definition of terms

1) Program – a time-bound intervention similar to a project but which may cut across
sectors, themes or geographic areas, involving multiple institutions, and may be
supported by several different funding sources.

Programmes can exist for a specific period of time or for many years e.g. curriculum,
HIV/AIDS.

2) Program design – The overall plan/strategy of a programme. It includes all the necessary
components involved in thinking through all the processes of the programme.

3) Monitoring – A continuous management function that aims at providing programme


managers and key stakeholders with regular feedback and early indications of progress or
lack thereof in the achievement of intended results.

4) Evaluation – A time-bound exercise that attempts to assess systematically and


objectively the relevance, performance and success, or the lack thereof, of ongoing and
completed programmes.

5) Stakeholders – People, groups or entities that have a role and interest in the aims and
implementation of a programme. Can affect or be affected by the programme.

6) Target group - The main stakeholders of a programme who are expected to gain from the
results of the programme. Also referred to as the sectors of the population that a programme
aims to reach in order to address their needs.

7) Beneficiaries - Individuals, groups or entities whose situation is supposed to improve;


the target group and others whose situation may improve as a result of the development
programme.
8) Program management - is the process of planning, organizing, implementing and
managing resources to bring about the successful completion of program goals and
objectives.

1.4. Program and project differences

Although the words program and project tend to be used interchangeably, there are differences.
A program is the more comprehensive of the two, involving a series of interlocking activities or
projects for the achievement of key organizational goals or strategic objectives.
 Programs tend to be longer term, such as an organization’s commitment to providing
services to adolescents, or its interest in quality health care service delivery, or its
advocacy for more broad-based participation in political or governmental affairs.
 A programme is usually divided into smaller units called projects; it forms an umbrella
structure to manage related projects.
 E.g. in a water and sanitation programme, the following can be projects:
 Construction of wells/dams
 Health education
 Training of skilled personnel
 Mapping and research on water scarce areas

A project is a more discrete undertaking, usually focused on some aspect of the overall program
(e.g., implementing or expanding community-based reproductive health (RH) services or
developing an initiative to train teachers to provide accurate RH information).
 Development projects are often the constituent activities of programmes

NOTE

A program is only as good as its constituent parts, and a project is only successful if it
contributes to dynamic growth and progress. Characteristics of well-designed, well-
managed, and effective programs and projects are generally the same.

1.5 Program cycle

Every programme has to follow a series of phases, allowing the process to be guided from the
moment the problem is identified until it is solved. This series of phases is known as the
programme cycle.

Program cycle steps


a. Needs Assessment/situational analysis
This is the process of establishing the needs, issues of concern or problems of the people who are
to benefit from a programme. It is done to:
 Establish a need
 Provide information
 Identify the gap/problem
 Determine resource availability
 Support decision making

b. Planning & Design


This involves planning the specific programme that will respond to the needs/issues identified in
the assessment.
It provides the structure of what has to be achieved, how it is to be implemented and how
progress will be verified. It is a crucial phase as its quality will influence the subsequent phases
in the cycle.
At this stage, you:
 Develop goals, objectives & targets
 Plan activities to achieve the goals/objectives
 Develop the programme guide, budget & management plan
 Make decisions (products, staff, space)

c. Implementation

This refers to the execution of the program according to the plan.

It involves:
 Actualization of the plans/design prepared
 Acquisition & use of resources
 Management put in place & taking action
 Combination of resources to make the plan work

d. Monitoring
It refers to the systematic and continuous data collection on program processes, activities and
results. Monitoring is important because it:
 Identifies program strengths & weaknesses
 Recommends corrective measures
 Provides regular feedback (stakeholders) on progress
 Gives information to make decisions on program
 Ensures work stays on course

e. Review & Evaluation
This stage compares outcomes to previously outlined program goals & targets. It determines
relevance, validity of design, efficiency, effectiveness, and sustainability of a programme.

f. Impact Assessment
It focuses on determining sustained changes in people's lives as a result of implementing the
programme. For example, it:
 Assesses significant/long-term effects on the population
 Considers effects/changes that may be intended/unintended, direct/indirect
 Tracks the change process over time

[Figure: The program cycle illustrated]

1.6 Summary

In this topic, we have looked at the basic but important terms related to program
design, monitoring and evaluation. We have also learnt the distinction between a
project and a program and explored the program cycle. With this understanding, we
are now ready to concentrate on program design, which forms our next lecture.

1.7 Activity

1. Carefully analyze development initiatives in your village and categorize them into
programs and projects.

1.8 Further Reading

 Read more on the program cycle

1.9 Self Test Questions

 Define monitoring and evaluation


 In which way does a program differ from a project?

1.10 Glossary

Program – a time-bound intervention similar to a project but which may cut across sectors,
themes or geographic areas, involving multiple institutions, and may be supported by several
different funding sources.

Program design – the overall plan/strategy of a programme

Monitoring – a continuous management function that aims at providing programme managers


and key stakeholders with regular feedback and early indications of progress or lack thereof in
the achievement of intended results.

Evaluation – a time-bound exercise that attempts to assess systematically and objectively the
relevance, performance and success, or the lack thereof, of ongoing and completed Programmes.

Stakeholders – people, groups or entities that have a role and interest in the aims and
implementation of a programme. Can affect or be affected by the programme.

Target group - main stakeholders of a programme that are expected to gain from results of a
programme

Beneficiaries - individuals, groups or entities whose situation is supposed to improve; the target
group and others whose situation may improve as a result of the development programme.

1.11 Answers to the Self-Test Questions

1. In which way does a program differ from a project?

Program                                       Project
More comprehensive and broad                  Less comprehensive and more focused
Tends to be longer                            Tends to be shorter
Divided into smaller units called projects    Divided into smaller units called activities

References

1. Chikati, J. (2009). Monitoring and Evaluation Handbook. Nairobi. Regional


Partnership for Resource Development Publication.
2. Community Places. (2014). Community Planning toolkit: community engagement. United
States. Community Places.
3. International Federation of Red Cross and Red Crescent Societies. (2011).
Project/programme monitoring and evaluation (M&E) guide. Geneva. IFRC
& RCS.
4. International Labour Organisation (2010). Project Design Manual: A step-by-step tool
to support the development of cooperatives and other forms of self-help
organizations. Accessed from www.ilo.org/coop
5. Pathfinder International. (2009). Program Design and Management. United States.
Pathfinder.

LECTURE 2
PROGRAM DESIGN

2.0 Introduction

The following lesson will introduce you to program design. Specifically, we shall learn the
process of designing a program and explore the components of a program design.

2.1 Objectives

By the end of this topic you should be able to:


 Discuss why program design is the most important stage in the program
cycle
 Design a program
2.2 Lecture Outline

 Program design process


 Components of a program design

2.3. Program design process


Each program must be developed in response to documented needs or problems in the
community, region, or for the group to be served. Therefore, program justification depends on
demonstrating a clear understanding of the problem and preparing a coherent, appropriate,
relatively detailed set of proposed solutions.

2.3.1 Identifying community problems/needs
The first step in the design phase is the identification of the problem/need at hand. The
methodology used is called situation analysis or needs assessment.
Every project aims to help solve a problem that affects the target group or groups. The problem
analysis identifies the negative aspects of an existing situation and establishes the “cause and
effect” relationships among the problems that exist.

The process involves:


 Identifying all major problems uncovered in your assessments, community diagnoses,
surveys, or other data-gathering activities. Examples: lack of health facilities, illiteracy,
high rates of drug abuse, insecurity, unemployment, poverty, lack of water & sanitation.
 Rank each problem (replace A, B, C in the chart below with the actual problems
identified during assessments, e.g., A = poverty, B = lack of access, C = myths and
misconceptions) against each criterion, using a numerical scale (1-3 or 1-5, with
higher numbers indicating a strong linkage to the criterion). Add the rankings for
each problem; those with the highest scores are the ones you may wish to
consider giving priority in the project.
Criterion Problem A Problem B Problem C

Severity
Extent
Within our control
Feasibility
Possibility of simple solution
Relationship to organizational policy/mission
Relationship to government advocacy
Long-range impact
Organizational image/influence

Adapted from CAFS: Family Planning Programme Management, Module VI, p.36
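
To make the ranking arithmetic concrete, here is a minimal Python sketch; the problems,
criteria and scores below are hypothetical, not taken from the module:

# Minimal sketch of criterion-based problem ranking (hypothetical data).
# Each problem is scored 1-5 against each criterion; totals suggest priority.

criteria = ["Severity", "Extent", "Within our control", "Feasibility"]

scores = {
    "Poverty":                  [5, 5, 2, 2],
    "Lack of access":           [4, 3, 4, 4],
    "Myths and misconceptions": [3, 4, 5, 5],
}

totals = {problem: sum(ratings) for problem, ratings in scores.items()}

# Highest total first; close totals still need discussion, as the note below explains.
for problem, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{problem}: {total}")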

NOTE

Several problems may have similar totals despite very different ratings for the same criteria. And the
problem with the highest score may simply not feel like the highest priority. It may be necessary to
rank the criteria in order of priority, to decide which are the most important and give extra weight to
them. This exercise can be used to air and elaborate different views or reach consensus.

Specific problem(s) to be addressed by the program should be clearly articulated and supported,
whenever possible, by some baseline data. You should be able to:
 Describe the extent, scope, or severity of the problem, in order to put proposed results
in perspective.
 Describe the geographic area and demographic characteristics of the population
 Make an analysis of the causes of the problem
 Describe the results of previous efforts to solve the problem, by your own or other
organization(s). Identify intended beneficiaries, stakeholders, and principal actors
(especially an organization's staff), or interested agencies.

2.3.2 Finding solution/s to identified needs/problems


Based on the situation analysis, the solution/s chosen should be clearly linked to the
problems/needs; this makes it easier to justify your program. Problems may be effectively
addressed in a variety of ways; choose the best strategy considering the impact, costs and
acceptability to the target group. Those aspects of your approach which you think best address
the problem should be emphasized.
For example, if the need is health services, then one possible solution may be to establish health
centres that can offer affordable and quality treatment to community members.
Your recommended solutions should be detailed enough to provide answers for the following
questions:
 What approach/es will you use and why have you chosen this approach or approaches
over other possibilities?

 What other local organizations and donors are addressing this problem and how does the
proposed approach complement their activities? If applicable, how will you coordinate
with these other organizations?
 How will target groups or communities specifically benefit, i.e., how will they be “better
off” because of the program or project?
 What innovations can be expected? What tangible deliverables?
 How can it be replicated in other areas?
 What changes do you expect will result from this project?
 What sources of support are likely to be available to you for continuation of the program
for the longer term?

2.3.3 Formulating the program


Once a needs assessment has been carried out to determine the needs or problems in the
community and possible solutions have been identified and the best chosen, then it is time to
design or formulate the program.
Program formulation involves making detailed arrangements for the technical and operational
aspects of the program or solution we intend to carry out, such as the costing, financing and
scheduling of program activities.
The identification of goals and objectives and the definition of inputs, activities, outputs and
outcomes during the program design phase guide implementation and inform the assessment of
program performance in terms of the efficiency and effectiveness of program delivery, the
achievement of program objectives and impacts.
Program design methods generally involve the use of a logical framework to link resources and
actions required to implement program activities to their direct outputs, their flow-on effects and
their eventual contribution to the overall program goal.

2.4 Components of a program design

The logical framework is a way of presenting the substance of the program in a comprehensive
and understandable form. It is used to organize all the main elements including the objectives,
outputs, activities, indicators and assumptions.
Log frames provide a structured, logical approach to setting priorities and determining the
intended results and activities of a program. They can also provide the basis for evaluating the
effectiveness, efficiency and relevance of a program.

2.4.1 Typical structure of a log frame


The log frame usually consists of a matrix with four columns and four rows which summarises
the program structure.

Project structure        Indicators        Means of Verification        Key assumptions
Development objective
Immediate Objective
Outputs
Activities

Logical Framework

 The first column is the program’s hierarchy of objectives. It identifies what the program
wants to achieve and how, and clarifies the causal relationships. Program objectives are
achieved by providing the direct recipients with certain outputs. Outputs are produced by
a set of activities. Outputs and activities are the means by which to obtain the objectives.
Therefore they are defined after the project objectives have been set out.

 The second and third column list indicators and the means of verification of impact, and
the knowledge and sources required to assess the reliability of data. In other words, the
indicators provide evidence of the program’s progress toward the intended objectives.
Hence they are the core components of the program monitoring and evaluation system,
which enables the implementing agency to make the necessary adjustments throughout
the implementation, as well as to demonstrate the program’s progress (or lack of it) to the
stakeholders, donors and other partners. Once the indicators have been decided, the
means of verification provide precise reference to the sources of information to be
consulted in order to verify the program’s performance and results. Indicators can be
quantitative (number of members, percentage of women participating in board meetings,
etc.) or qualitative (customer satisfaction, quality of services, etc.) but all of them have to
be assessable.
 The fourth column specifies important assumptions and uncertainties beyond the control
of the program. The context in which the program operates plays an essential role in its
success. Factors beyond the program’s control may affect the achievement of the outputs
(e.g. a major unexpected flood or drought can provoke substantial crop failures,
regardless of the successful implementation of a rural development project). Such events
or conditions are identified in the assumption analysis and incorporated into the program
design. If the assumptions (e.g. the rainy season is regular, as expected) do not hold, the
program may be unable to proceed. Sometimes this column contains “risks”, namely factors
that may compromise the success of the project, and therefore need to be taken into
account.
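
For illustration only, the four-column structure can be represented as a simple data
structure. The following Python sketch uses hypothetical field names and example values;
it is not a standard log frame format:

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogFrameRow:
    """One row of a log frame: an objective level with its three companion columns."""
    level: str                      # e.g. "Development objective", "Output"
    statement: str                  # what the program wants to achieve at this level
    indicators: List[str] = field(default_factory=list)
    means_of_verification: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)

# Hypothetical example row at output level
output_row = LogFrameRow(
    level="Output",
    statement="Irrigation system upgraded and expanded",
    indicators=["Hectares covered by the upgraded system"],
    means_of_verification=["District agricultural office records"],
    assumptions=["Water supply remains adequate during construction"],
)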

Development objective
What is intended to be the longer-term impact of the project on the ultimate beneficiaries?
Indicators: What are the quantitative or qualitative indicators by which the achievement of the
development objective can be measured? Please note that indicators for the development
objective are often beyond the control of the project.

Immediate Objective
What are the intended benefits (the desired situation) and outcomes of the project for the target
group?
Indicators: What are the quantitative or qualitative indicators by which the achievement of the
immediate objectives can be measured?

Outputs
What are the tangible products or services delivered by the project to achieve the immediate
objectives?
Indicators: What are the quantitative or qualitative indicators by which the achievement of
outputs can be measured?

Activities
What activities must be carried out to generate each intended output?
Inputs: raw materials, equipment, human resources, etc.

The vertical logic:


This matrix is called a “logical framework matrix” because it follows both vertical and horizontal
logic. The first column represents the vertical logic of the log frame because it clarifies vertically
the causal relationships between the different levels of “objectives”, as illustrated in the
following diagram.

Project structure        Indicators        Means of Verification        Key assumptions
Development objective (Goal)
Immediate Objective
Outputs
Activities               Inputs & resources

NOTE

If you want to contribute to the development objective, THEN you must achieve the immediate
objective
If you want to achieve the immediate objective, THEN you must deliver the listed outputs
If you want to deliver the outputs, THEN the listed activities must be implemented

The horizontal logic:
This states how the objectives specified in the vertical logic will be measured (column 2,
indicators), where the information to verify them will be found (column 3, means of
verification), and the assumptions that are beyond the control of the program manager (column 4).

Project structure        Indicators        Means of Verification        Key assumptions
Development objective (Goal)
Immediate Objective
Outputs
Activities

Development objective: What are the quantitative or qualitative indicators by which the
achievement of this development objective can be measured?
Immediate objective: What are the quantitative or qualitative indicators by which the
achievement of the immediate objectives can be measured?
Outputs: What are the quantitative or qualitative indicators by which the achievement of outputs
can be measured?
Activities: What are the inputs and resources needed to implement the activities?

Setting the objectives, outputs and activities

The development objective (Goal)


This explains what the project does for the final beneficiaries in the longer term. Depending on
the action domain of the target group (from a ministry to a local self-help group), it also shows
how the project contributes to international development priorities, such as the Millennium
Development Goals (MDGs) and national development policies (Vision 2030 etc.). The
development objective can also describe how the project contributes to local development
strategies or to the strategic plan of an organization. The development objective will not be
achieved by your program alone. The program is just one piece in a complex puzzle. The
development objective corresponds to the overall picture, to which various development
programs, strategic plans and business plans all contribute.
In short, a development objective has to:
 Be consistent with the strategic development policy (at the level of the business plan or
other overarching development strategies)
 Avoid restating the immediate objectives in other words
 Be expressed as a desired end and not as a means
 Be a long-term objective to which the project will contribute
 Be a long-term objective for the ultimate beneficiaries
 Be stated clearly in verifiable terms.

The immediate objective


Your program is responsible for its achievement. It should address the core problem and be set
out in terms of sustainable benefits for the target group. We suggest that you only have one
immediate objective per program, in order to avoid excessive complexity. Multiple immediate
objectives are used by large organizations with a wide domain of action, such as a ministry or an
international development agency.
An immediate objective describes the desired program outcome for the target group (direct
recipients). In general, these are desirable changes for the target group, for example:

In behaviour: community members follow good safety and health practices on HIV/AIDS
prevention and mitigation. In a system or service: community members make increased use of
renewable energy.
In short, an immediate objective has to:
 Contribute to achieving the development objective
 Avoid restating the outputs in other words
 Be expressed as a future completed action, an end state, and not a process.
 Be a medium-term objective to be achieved by the end of the project for the target group
 Be stated clearly in verifiable terms.

The outputs
These are the products of the activities, the combination of which leads to the achievement of the
immediate objective. The implementing organization is accountable for delivering these services
and products. Some examples of outputs are: Management capacity of young entrepreneurs in
the community is reinforced; irrigation system is upgraded and expanded to the neighbouring
districts.
In short, an output has to be:
 Delivered by the project
 Necessary to achieve the immediate objective
 Demand-driven and not supply-led
 Stated clearly in verifiable terms
 Feasible with the available budget

The activities
These are the actions and means that will produce the outputs. In most cases, they are related to:
training, equipment, institutional support, planning, studies, etc.
Activities cannot always be taken directly from the objective tree. In many cases, they will have
to be defined during the formulation step, as it is quite rare that a problem and objective tree
exercise allows such detail of analysis. In terms of ownership of the project, it is therefore
advisable to submit the log frame again to the key stakeholders and partners. This will make sure
that the described activities are based on consensus.

In short, an activity has to:
 Define the action strategy of the program
 Be realistic in terms of inputs, resource needs and managerial capacity
 Be stated clearly in verifiable terms

Example of goal, objective, output and activities


Development objective (Goal)
To contribute to reduced maternal mortality rate in Garissa County, Kenya

Immediate Objectives
1. To increase access to health care facilities for pregnant women by 2015
2. To improve the quality of home delivery services to pregnant women by 2015

Outputs
Objective 1
i. 100 women seek maternal health services every month
ii. 1 outreach event conducted every month
Objective 2
i. 25 trained birth attendants every month
ii. 50 delivery packages supplied to birth attendants every month

Activities
1. Provision of mobile clinics
2. Training pregnant women on the importance of prenatal health services
3. Planning of outreach events
4. Networking with other organizations in carrying out outreaches
5. Recruiting community birth attendants
6. Training of community birth attendants
7. Identifying required equipment for delivery

Setting up the indicators and means of verification

Indicators
You will only know if your objectives and outputs have been achieved by setting indicators that
are measurable. An indicator is an objective measure that indicates if and to what extent progress
(in relation to the program’s objective and outputs) is being achieved. Indicators of achievement
are usually required at output level. Indicators for activities are not developed, since an activity
is a straightforward action that you either have or have not implemented.

NOTE

Indicators of achievement measure change brought about by the program. They can be
broken down into targets and milestones. Targets define the desired result that the project
intends to reach. Milestones give information on whether the project is on track in
achieving the targets. Where possible, targets and milestones should enable measuring
change for women and men separately

Type of indicators
Indicators can be quantitative or qualitative:
Quantitative indicators use numerical data, (such as numbers of people or percentages) to
indicate progress. They can be specified through a target or milestone or both, depending on the
donor requirements. See the example below:
 Indicator at the level of objective - percentage of tomato production increase by
community farmers
 Target at output level - increase of tomato production by 10% after 6 months

 Milestone at output level - increase of tomato production by 3% after 3 months. Increase
of tomato production by 6% after 5 months. Increase of tomato production by 10% (3%
+ 6% + 1%) after 6 months (the achievement of the target)
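
As a worked illustration of targets and milestones, the following Python sketch compares
measured increases against the tomato-production milestones above; the reported figures
are hypothetical:

# Sketch: checking reported production increases against the example milestones.
milestones = {3: 3.0, 5: 6.0, 6: 10.0}   # month -> expected cumulative % increase
reported = {3: 2.5, 5: 6.5}              # month -> measured % increase (hypothetical)

for month, expected in milestones.items():
    actual = reported.get(month)
    if actual is None:
        print(f"Month {month}: no data yet (target {expected}%)")
    elif actual >= expected:
        print(f"Month {month}: on track ({actual}% vs target {expected}%)")
    else:
        print(f"Month {month}: behind ({actual}% vs target {expected}%)")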

Qualitative indicators use data based on attributes or qualities, (such as perceptions) to indicate
progress.
For example:
Indicator - level of local farmer’s satisfaction
Indicator of achievement - two thirds of the local farmers are satisfied with the quality of the
tomatoes after 12 months

Direct and indirect Indicators


Direct indicators have a direct relationship to the objective or output. Direct indicators are
preferred because they are very specific and relevant, and we recommend that you use them as
often as possible. In some cases, they may be costly to measure (for example, data on household
income require expensive statistical surveys and a good baseline). An example of this type of
(achievement) indicator is:

60% of women and young community members live above the poverty line at the end of the
program
Indirect indicators (proxies) measure variables that are associated with the objective and
fluctuate in the same direction as it does. Stakeholders are often well placed to propose proxies
because they are more familiar with the habits of the ultimate beneficiaries. However, indirect
indicators are less specific, because external factors other than the objective they try to measure
may interfere and give an incorrect reading of the indicator.

Consumption of tomatoes has increased by 5% in district X

Indicators at the level of development objective


At the development objective level, indicators should be linked to the ultimate beneficiaries as
well as to higher-level national frameworks (national strategies, SDGs, etc.).

Since the program only contributes to the achievement of the development objective, it is very
difficult to monitor and assess indicators at that level. Furthermore, there are long-term impacts
which appear only after the project ends. This is why, in some cases, a post-project evaluation
could be required five years after completion.
Indicators for development objectives are not always required. If they are, we recommend that
you use indicators devised by national organizations, such as the employment rate in the region,
and lobby to make sure that they are disaggregated by production type (employment in the
tomato sector), sex and age.

Indicators at the level of immediate objective


The immediate objective brings a change in behaviour, services or institutions for the direct
recipients (target group). It is a consequence of the outputs. Therefore the immediate objective’s
indicators are important and have to describe the project benefits and expected value.
Example:
Immediate objective: Members’ income particularly that of women and young farmers, increased
through the improvement of the cooperative’s governance and business performance
Indicator: The percentage of income increase of the beneficiaries, including 30% women.

Indicators at the level of outputs


At the output level, the indicators are mainly for use by the program management, because they
refer to what the project delivers. Indicators at the level of output should not be a summary of
what has been stated at the activity level, but should describe the measurable consequence of
activity implementation.
For example:
Output: Long-life milk and quality dairy products produced
Indicator: Customers are satisfied with the quality of the cooperative dairy products.

Means of Verification (MOV)


The means of verification should be considered and specified at the same time as the formulation
of indicators. This will help to test whether or not the indicators can be realistically measured
with a reasonable amount of time, money and effort.

The means of verification should specify:
 HOW the information should be collected (e.g. from administrative records, special
studies, sample surveys, observation,) and/or the available documented source (e.g.
progress reports, program accounts, official statistics, engineering completion
certificates).
 WHO should collect/provide the information (e.g. local government workers, contracted
survey teams, the district agricultural office, and the program management team).
 WHEN/HOW OFTEN the information should be collected (e.g. monthly, quarterly, annually).

It is important to make sure that the required information can be collected through existing
systems or at least with improvements to existing systems, and with the available resources. In
some cases, additional resources might be needed to develop a survey or database. This should
then be added to the project budget.
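
One illustrative way to record the HOW, WHO and WHEN of each means of verification is
as a small structured table. The Python sketch below uses hypothetical entries consistent
with the maternal health example above:

# Sketch: a structured record for each means of verification (MoV).
mov_plan = [
    {
        "indicator": "Number of women seeking maternal health services",
        "how": "Health facility records",      # HOW / documented source
        "who": "Programme management team",    # WHO collects or provides it
        "when": "Monthly",                     # WHEN / frequency
    },
    {
        "indicator": "Outreach events conducted",
        "how": "Monthly outreach reports",
        "who": "Project team",
        "when": "Monthly",
    },
]

for entry in mov_plan:
    print(f"{entry['indicator']}: {entry['how']}, by {entry['who']}, {entry['when'].lower()}")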

Setting up the key assumptions


Key assumptions are conditions that have to be met if the project is to succeed. They are
included in the fourth column of the log frame. In other words, they are the answer to the
question “what external factors are not controlled by the project, but may affect its
implementation and long-term sustainability?”

The concept of risk is frequently referred to together with the key assumptions. The
difference between risks and assumptions lies in their negative or positive formulation. For
a project to be successful, risks should not occur, while key assumptions need to hold true.

To identify assumptions, you have to assess the probability and significance of external
conditions to be met in order to achieve your objectives and outputs. Assumptions are not set at
the level of activities, since their implementation is supposed to be fully under the control of the
project management.

The fourth column in the matrix is used to highlight assumptions about the external conditions
that need to be fulfilled if the vertical logic of the project structure (objectives and outputs) is to

hold true. This same column highlights those risks that, although they are not likely to happen,
may affect either the progress or the success of the project.

Types of assumptions

Implementation assumptions
These assumptions link the immediate objectives of the program to the outputs. Since the
program’s management is not able to control what happens with program outputs, there are
necessarily important assumptions at this level. These assumptions are critical to the success of
the program and form an important part of any evaluation.
For instance, the local government agrees to develop and implement an infrastructure
development plan that will benefit the community.

Development assumptions
These assumptions link immediate objectives to the development goal. The question being asked
is: How is the achievement of the immediate objective going to contribute to national goals?
These are often stated in the form of hypotheses or theories. They are important for the appraisal
and evaluation of the project, but are not usually related to implementation of activities.
For instance, stable global economic trends.

Sustainability assumptions
These assumptions relate to the sustainability of the development and immediate objectives
For instance, the government agrees to reform the agricultural law.

Example of a Log Frame for a Health Program

Goal
To contribute to reduced maternal mortality rate in Garissa County, Kenya
Indicators: Reduced maternal mortality rate of 5% for 1,200 pregnant women between Jan-Dec
2015 in Garissa County
Means of verification: Data collected in health facilities by programme staff at the end of the year
Key assumptions: No conflict during project duration

Immediate objective
1. To increase access to health care facilities for pregnant women by 2015
Indicators: 10 mobile clinics strategically set up; 2 home visits by birth attendants to pregnant
women every month
Means of verification: Photographs of mobile clinics taken by programme team every month
Key assumptions: Readily available support and cooperation from government

Outputs
i. 100 women seek maternal health services every month
ii. 1 outreach event conducted every month
Indicators: 1,200 pregnant women to seek maternal health services; 12 outreaches conducted
Means of verification: Health records obtained by programme team on a monthly basis; monthly
outreach reports written by project team
Key assumptions: Availability of adequate resources

Activities
i. Provision of mobile clinics
ii. Training pregnant women on importance of prenatal health services
iii. Planning of outreach events
iv. Networking with other organizations in carrying out outreaches
Inputs: Funds; human resources; equipment & materials; information
Key assumptions: Target group (pregnant women) will actively participate in programme
activities
2.4.2 Work Plan

Once program activities have been identified and agreed upon in the log frame, the program
manager and staff have to ensure that these activities are adequately planned for proper
execution.
Work plans typically list tasks required, who will be responsible for each task, when each task
will need to be undertaken, and the amounts of human and other resources required to complete
each task. An accompanying budget, ideally organized both by objectives and activities and
against standard accountancy budget lines, ensures all costs have been accounted for, and
supports future financial management of the project.
The project work plan is essentially a calendar or schedule that links the tasks to be done to the
resources needed to do them.

The work plan identifies:


 What specific tasks are required?
 Who will be responsible for helping to complete each task?
 When each task will be undertaken and the sequence of linked tasks
 How much each task will cost and how it will be funded
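
A minimal Python sketch showing how the four questions above can be captured as task
records; the task names, dates and costs are hypothetical:

from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str            # what specific task is required
    responsible: str     # who is responsible for completing it
    start: date          # when it begins
    end: date            # when it should finish
    cost: float          # estimated cost, to be matched with funding

work_plan = [
    Task("Recruit community birth attendants", "Project officer",
         date(2015, 1, 5), date(2015, 1, 30), 500.0),
    Task("Train community birth attendants", "Training coordinator",
         date(2015, 2, 2), date(2015, 2, 27), 1200.0),
]

total_cost = sum(task.cost for task in work_plan)
print(f"Planned tasks: {len(work_plan)}, estimated cost: {total_cost}")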

Importance of work plans


1. Ensure all the essential tasks in the project are planned and reduce the chance of
overlooking an essential step in completing the project
2. Allocate tasks efficiently to individuals (all tasks allocated with no duplication)
3. Be aware of short-term priorities and how individual performance will be assessed
4. Establish a project schedule that can be tracked and monitored
5. Set expectations for project progress and establish accountability
6. Analyse problem areas more effectively
7. Develop a more accurate budget and ensure adequate funding and other resources are
available

Developing a work plan

a. Identify Specific Tasks That Need to Be Done


Developing a work plan starts by reviewing the various activities that you identified in your
general action plan and determining which of these need to be implemented over the current
planning period. These can be compiled in a work breakdown structure.

For each activity, you will need to break it down into specific tasks that will need to be
completed to accomplish the activity. A task is a specific action in a work plan required to
implement activities or other components of a strategic plan. It is a discrete package of work that
is assigned to specific individuals to complete over a relatively short time frame.

In many cases, breaking down an activity into its component pieces is a relatively simple
process, especially if you have an experienced team and the activities are relatively
straightforward and well understood. In some cases, however, where the work is more complex
or new to the group, this exercise may require creative thinking such as brainstorming a range of
possible tasks to accomplish a specific activity and then evaluating these possibilities to see
which makes the most sense.

Importantly, keep this process simple. The key here is to understand the work that you need to
do to carry out your strategies in the short run, in enough detail for you and your team to know
who is responsible, over what timeframe, and with enough clarity so you can monitor progress.

b. Define Who Will Be Responsible for Each Task
As you develop each task, it is also important to define who will be responsible for it across your
program team members, consultants, and partners.
The following factors should be considered when defining responsibilities for a task:
 Skills and knowledge required for the task
 Availability of individual – does the person have the time to do the work?
 Individual’s interest and motivation in carrying out the task
 Organizational structure foreseen for the whole project
 Level of authority or positional power required for the task
 Natural groupings of tasks

c. Determine When Each Task Will Take Place


For each task, you should estimate a start and end date and the total time required to complete
the task (e.g. number of days). The accuracy of a task’s time estimate usually depends on
whether you have done similar work in the past. Where this experience is lacking, sometimes
you just need to accept this uncertainty and get on with the task. However if you are very
concerned you may be able to improve the quality of your estimate by breaking large or complex
tasks down further into sub tasks and then summing the time for the sub tasks. It is important to
make the program schedule realistic and take into account everything from dependencies
between tasks to holidays to other activities that project staff may have to do.

Estimating task duration is probably one of the most challenging aspects of project work
planning. Task duration is rarely overestimated, and is frequently underestimated. The
estimation process is complex because activity duration can be impacted by many variables such
as staff availability, funding availability, technical capacity, unexpected events, individual
productivity, and problems during the implementation of a project.

When estimating the duration of a task, past experience is your greatest advantage. When
building a project schedule, program managers should take into account staff absences, holidays,
meetings, and related issues. No staff member is 100 percent productive every hour of the
workday. If a scheduled task assumes 100 percent productivity, the schedule rapidly falls apart.
A successful schedule builds these types of factors into the duration estimates. An accepted rule
of thumb for estimating effort is that staff typically spends two-thirds of their time on productive
tasks, while the remaining time is taken up with meetings, unplanned work, or other distractions.
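
Applied to a concrete estimate, the two-thirds rule of thumb works as in this small Python
sketch; the 10-day effort figure is hypothetical:

# Sketch: adjusting a raw effort estimate with the two-thirds productivity rule.
raw_effort_days = 10                 # effort if staff were 100% productive
productive_fraction = 2 / 3          # rule of thumb from the text

scheduled_days = raw_effort_days / productive_fraction
print(f"Schedule roughly {scheduled_days:.1f} working days, not {raw_effort_days}.")
# -> Schedule roughly 15.0 working days, not 10.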

A common technique to estimate task duration is to base estimates on a similar scope of work.
Past project data can be used successfully to improve estimates and therefore program
performance on deliverables. If historical data does not exist, seek the advice of others who have
completed similar tasks. It is also good practice to consult the staff who will be assigned the
work for their estimates.

Using a Gantt chart will help estimate the start and end times of tasks and activities, and
eventually of the whole program.
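
For illustration, a Gantt chart can be rendered even as plain text; the following Python
sketch uses hypothetical activities and a 12-week timeframe:

# Sketch: a text-based Gantt chart over a 12-week period.
activities = [
    ("Provision of mobile clinics", 1, 12),   # (name, start week, end week)
    ("Training pregnant women",     2,  6),
    ("Planning of outreach events", 3,  5),
    ("Conducting outreach events",  6, 12),
]

weeks = 12
for name, start, end in activities:
    bar = "".join("#" if start <= w <= end else "." for w in range(1, weeks + 1))
    print(f"{name:32s} {bar}")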

d. Estimate Costs for Each Task

As you develop each task, you should also estimate the monetary cost of completing the task as
well as describe any other required resources. There are essentially four major types of costs
associated with any activity:
 Labour
 Materials
 Other direct costs (travel, telephone, etc.)
 Indirect costs (i.e. overheads – office rental, utilities, and administrative costs)

It is important to identify who is responsible and estimate how long each activity will take before
estimating the financial cost. You need to judge on a per program basis how accurately you need
to identify and allocate costs at the task level. Usually it is useful to have reasonable estimates in
place to help you produce budgets, but don’t make it a long exercise.

A work plan template:
Activities        Time frame        Resources        Person responsible

Budgeting
Once you have developed a work plan for your project with expected costs, the next step is to
prepare a budget that reflects the costs and resource needs for carrying out the project over the
same time period. As noted with work plans, the time period for a detailed budget is often
annual and coincides with your programme or national office’s fiscal year and regular planning
and budgeting cycle. However for multi-year projects, budget estimates for subsequent years are
made, based on an extrapolation of the first year budget. This enables multi-year contracts to be
agreed and supports longer-term planning. Work plans and budgets can be developed for a
shorter time frame as well, depending on your specific needs.

Why Budgets Are Important


Preparing a detailed budget for your program may seem tedious for very busy field managers and
staff, but it is among the most important steps you need to take to ensure full and successful
implementation of your project. An accurate budget will help you and your team:
 Understand the full costs of your project in the work plan timeframe;
 Ensure that your team understands the demand for resources and has access to all needed
resources for completion of the project;
 Provide a basis for proposal development and fund raising;
 Understand any gaps between expected expenditures and needed income, in enough time
to raise additional resources or take corrective action;
 Create a baseline against which your team can monitor performance and results; and
 Increase transparency for your donors and other stakeholders.

Developing a budget involves going through the following steps.


1) Confirm the timeframe and format for the budget
2) List, cost and group the resources needed
3) Include cost recovery
4) Balance the budget
5) Review and revise the budget on a regular basis
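
As an illustration of steps 2 and 4 (listing and costing resources, then balancing against
income), here is a Python sketch using the four cost types listed earlier; all amounts and
the income figure are hypothetical:

# Sketch: grouping costed resources into budget lines and balancing against income.
budget_lines = {
    "Labour":             [("Project officer", 12_000), ("Trainers", 4_000)],
    "Materials":          [("Delivery packages", 2_500)],
    "Other direct costs": [("Travel", 1_800), ("Telephone", 400)],
    "Indirect costs":     [("Office rental", 3_000), ("Utilities", 900)],
}

expected_income = 24_000  # hypothetical committed funding

total = sum(cost for items in budget_lines.values() for _, cost in items)
for line, items in budget_lines.items():
    print(f"{line}: {sum(cost for _, cost in items)}")

gap = total - expected_income
print(f"Total: {total}; income: {expected_income}; funding gap: {gap if gap > 0 else 0}")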

2.5 Summary

In this topic, we have discussed how a program can be designed. We have also extensively
looked at major program design components like the log frame and the work plan. In our
next lecture, we shall look at monitoring of the designed program.

2.6 Activity

1. Using the log frame, design a program that suits your local community. Use the program
design process learnt in this lesson as a guide.

2.7 Further Reading

 Explore further the use of work plans and Gantt charts in development
programs
 Analyze the different programs non-profit organizations are engaging in, in Kenya

2.8 Self Test Questions

 As a manager in charge of a program, how can you ensure that activities have been
properly planned for?
 Differentiate between a work break down structure and a Gantt chart
 Why should programs have budgets?

2.9 Glossary

1. A task is a specific action in a work plan required to implement activities or other


components of a strategic plan.
2. A logical framework is a way of structuring the main elements in a project and
highlighting the logical linkages between them.
3. A work plan is a document used to organize a program. It outlines how the organization
plans to complete a quality program within a given amount of time and in compliance
with a set budget.

2.10 Answers to the Self-Test Questions

1. As a manager in charge of a program, how can you ensure that activities have been
properly planned for?

Use the Work plan to identify:

 What specific tasks are required?


 Who will be responsible for helping to complete each task?
 When each task will be undertaken and the sequence of linked tasks
 How much each task will cost and how it will be funded

2. Differentiate between a Work Break down Structure and a Gantt chart

 A Work Breakdown Structure is a tool used to define and group a program's tasks in a
way that helps organize and define the total work scope of the program.

 A Gantt chart is a type of bar chart that illustrates a project schedule

3. Why should programs have budgets?


 Understand the full costs of your project in the work plan timeframe;
 Ensure that your team understands the demand for resources and has access to all needed
resources for completion of the project;
 Provide a basis for proposal development and fund raising;
 Understand any gaps between expected expenditures and needed income, in enough time
to raise additional resources or take corrective action;
 Create a baseline against which your team can monitor performance and results; and
 Increase transparency for your donors and other stakeholders.

References

1. International Labour Organisation (2010). Project Design Manual: A step-by-step tool


to support the development of cooperatives and other forms of self-help
organizations. Accessed from www.ilo.org/coop
2. Levin, G. (2012). Program Management: A Life Cycle Approach. London. Taylor &
Francis.
3. Mikov, M., Beale, W. and Tua, J. (2007). Work plans and Budgets. Australia. WWF.
Retrieved from https://intranet.panda.org/documents/folder.cfm?uFolderID=60981
4. Pathfinder International. (2009). Program Design and Management. United States.
Pathfinder.
5. Thiry, M. (2012). Program Management: Fundamentals of Project Management.
England. Gower Publishing Limited.

LECTURE 3

PROGRAM MONITORING

3.0 Introduction
Welcome to the third lecture of this course. We shall now learn how we can monitor a
development program that has been designed and implemented.

3.1 Objectives

By the end of this topic you should be able to:


 Understand monitoring in the context of development programs
 Monitor development programs

3.2 Lecture Outline

 The concept of monitoring


 Monitoring process
 Benefits of monitoring
 Challenges encountered in monitoring

3.3. The concept of monitoring


Monitoring can be defined as a continuing function that aims primarily to provide the
management and main stakeholders of an ongoing intervention with early indications of
progress, or lack thereof, in the achievement of results. An ongoing intervention might be a
project, program or other kind of support to an outcome. Monitoring helps organizations track
achievements through the regular collection of information to assist timely decision-making, ensure
accountability, and provide the basis for evaluation and learning. It aims at providing regular
oversight of the implementation of an action in terms of input delivery, work schedules, targeted
outputs etc. Monitoring actions must be undertaken throughout the lifetime of the program.

Types of monitoring

a. Results monitoring
It tracks effects and impacts. This is where monitoring merges with evaluation to determine if the
project/programme is on target towards its intended results (outputs, outcomes, impact) and
whether there may be any unintended impact (positive or negative). For example, a psychosocial
project may monitor that its community activities achieve the outputs that contribute to
community resilience and ability to recover from a disaster.

b. Process (activity) monitoring
It tracks the use of inputs and resources, the progress of activities and the delivery of outputs. It
examines how activities are delivered – the efficiency in time and resources. It is often conducted
in conjunction with compliance monitoring and feeds into the evaluation of impact. For example,
a water and sanitation project may monitor that targeted households receive septic systems
according to schedule.

c. Compliance monitoring
It ensures compliance with donor regulations and expected results, grant and contract
requirements, local governmental regulations and laws, and ethical standards. For example, a
shelter project may monitor that shelters adhere to agreed national and international safety
standards in construction.

d. Context (situation) monitoring


It tracks the setting in which the project/programme operates, especially as it affects identified
risks and assumptions, but also any unexpected considerations that may arise. It includes the
field as well as the larger political, institutional, funding, and policy context that affect the
project/programme. For example, a project in a conflict-prone area may monitor potential
fighting that could not only affect project success but endanger project staff and volunteers.

e. Beneficiary monitoring
It tracks beneficiary perceptions of a project/programme. It includes beneficiary satisfaction or
complaints with the project/programme, including their participation, treatment, access to
resources and their overall experience of change. Sometimes referred to as beneficiary contact
monitoring (BCM), it often includes a stakeholder complaints and feedback mechanism, which should take account of different population groups, as well as the perceptions of indirect beneficiaries
(e.g. community members not directly receiving a good or service). For example, a cash-for
work programme assisting community members after a natural disaster may monitor how they
feel about the selection of programme participants, the payment of participants and the
contribution the programme is making to the community (e.g. are these equitable?).

f. Financial monitoring
This accounts for costs by input and activity within predefined categories of expenditure. It is
often conducted in conjunction with compliance and process monitoring. For example, a livelihoods project implementing a series of micro-enterprises may monitor the money awarded and repaid, and ensure implementation is according to the budget and time frame (a minimal budget-variance sketch follows this list of monitoring types).

g. Organizational monitoring
It tracks the sustainability, institutional development and capacity building in the
project/programme and with its partners. It is often done in conjunction with the monitoring
processes of the larger, implementing organization. For example, a National Society’s
headquarters may use organizational monitoring to track communication and collaboration in
project implementation among its branches and chapters.
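
To make the budget-comparison idea in financial monitoring concrete, here is a minimal Python sketch (not part of the original guide; the expenditure categories and figures are invented for illustration) that compares actual spend against budget by category and flags over-runs:

# Minimal sketch of financial monitoring: compare actual spend to the
# budget by expenditure category and flag variances. All figures are
# illustrative assumptions, not data from this module.
budget = {"training": 5000, "transport": 1200, "materials": 3000}
actual = {"training": 5400, "transport": 900, "materials": 3100}

for category, planned in budget.items():
    spent = actual.get(category, 0)
    variance = spent - planned            # positive = over budget
    pct = 100 * variance / planned
    flag = "OVER BUDGET" if variance > 0 else "within budget"
    print(f"{category:<10} planned {planned:>6} spent {spent:>6} "
          f"variance {variance:>+6} ({pct:+.1f}%) {flag}")

A comparison like this, run for each reporting period, is one simple way to feed financial monitoring results into the progress reports discussed later in this lecture.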

3.4 Monitoring process

The International Federation of Red Cross and Red Crescent Societies has outlined six key steps for project/programme monitoring. Taken together, these steps guide the planning and implementation of a monitoring system for the systematic, timely and effective collection, analysis and use of project/programme information.
They include:

a. Identify the purpose and scope of the monitoring system


The purpose and scope of the monitoring system answers the question “Why do we need
monitoring and how comprehensive should it be?” It serves as a reference point for the
monitoring system, guiding key decisions such as informational needs, methodological
approaches, capacity building and allocation of resources. The following outlines some key
considerations when determining a monitoring system's purpose and scope.

• Review the programme's operational design (log frame); it is the foundation on which the monitoring system is built. The log frame is a summary of the project/programme's operational design, based on the situation and problem analysis conducted during the project/programme's design stage. It summarizes the logical sequence of objectives to achieve the project/programme's intended results (activities, outputs, outcomes and goal), the indicators and means of verification to measure these objectives, and any key assumptions.
• Identify key stakeholder informational needs and expectations. Planning a monitoring system based on stakeholder needs and expectations helps to ensure understanding, ownership and use of monitoring information. It is essential to have a clear understanding of the priorities and information needs of people interested in or affected by the project/programme. This includes stakeholder motivations, experience and commitment, as well as the political and other constraints under which various stakeholders operate. It is especially important that local knowledge is sought when planning monitoring functions to ensure that they are relevant to and feasible in the local context, and that information is credible, accepted and more likely to be supported.
• Identify any monitoring requirements. Important informational needs worth specific attention are those that arise from any donor guidelines and requirements, governmental laws and regulations, and internationally-agreed-upon standards. These requirements can include very detailed procedures, formats and resources, and are often non-negotiable. Therefore, it is best to identify and plan for them early in the monitoring planning process.
• Define the scope of the monitoring system, that is, its scale and complexity. It can be highly complex, with a variety of activities requiring considerable expertise and resources, or it can be relatively simple, relying on internal resources and capacities. Each of the points discussed above plays a key role in determining the scope of the monitoring system. For example, the complexity of a project/programme's design (e.g. how many and the type of outcomes it seeks to achieve) can have a significant impact on the scale and complexity of the system. Likewise, donor requirements can largely determine the precision and methodological rigour needed in the monitoring system.

b. Plan for data collection and management
Once you have defined the project/programme’s informational needs, the next step is to plan for
the reliable collection and management of the data so it can be efficiently analysed and used as
information. Data collection and data management are firmly linked, as data management begins the moment data is collected.
i. When planning for data collection, it is important to plan for the extent to which quantitative and qualitative data will be used.
• Quantitative data measures and explains what is being studied with numbers (e.g. counts, ratios, percentages, proportions, average scores, etc.). Quantitative methods tend to use structured approaches (e.g. coded responses to surveys) which provide precise data that can be statistically analysed and replicated (copied) for comparison.
• Qualitative data explains what is being studied with words (documented observations, representative case descriptions, perceptions, opinions of value, etc.). Qualitative methods use semi-structured techniques (e.g. observations and interviews) to provide in-depth understanding of attitudes, beliefs, motives and behaviours. They tend to be more participatory and reflective in practice.
ii. Triangulation is the process of using different sources and/or methods for data collection. Combining different sources and methods (mixed methods) helps to cross-check data and reduce bias to better ensure the data is valid, reliable and complete. The process also lends credibility if any of the resulting information is questioned. Triangulation can include a combination of primary and secondary sources, quantitative and qualitative methods, or participatory and non-participatory techniques. For example:
• Example of triangulating data sources: When determining community perception of a cash-for-work project, do not just include participants selected for the project, but also some who did not take part as they may have a different perspective (e.g. on the selection process for participating in the project). Also, include the views of the project staff, partners and other local groups working in the project/programme area.
• Example of triangulating data collection methods: A household survey is conducted to determine beneficiary perception of a cash-for-work project, and it is complemented by focus group discussions and key informant interviews with cash-for-work participants as well as other community members.
iii. Determine sample requirements. A sample is a subset of a whole population selected to study and draw conclusions about the population as a whole. Sampling (the process of selecting a sample) is a critical aspect of planning the collection of primary data. Most projects/programmes do not have sufficient resources to measure a whole population (a census), nor is it usually necessary. Sampling is used to save time and money by collecting data from a subgroup to make generalizations about the larger population (a minimal sample-size sketch follows this list).
iv. Prepare specific data collection methods/tools so they are ready for use. Sometimes methods/tools will need to be newly developed but, more often, they can be adapted from elsewhere.
v. Plan for data management; data management refers to the processes and systems for how
a project/programme will systematically and reliably store, manage and access
monitoring data. It is a critical part of the system, linking data collection with its analysis
and use. Poorly managed data wastes time, money and resources; lost or incorrectly
recorded data affects not only the quality and reliability of the data but also all the time
and resources invested in its analysis and use.
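
As a worked illustration of the sampling step (iii) above, here is a minimal Python sketch using Cochran's formula, a common approach for estimating survey sample sizes. The confidence level, margin of error and population figure are assumptions chosen for illustration, not prescriptions from this module:

import math

def cochran_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Approximate survey sample size using Cochran's formula with a
    finite-population correction. z = 1.96 corresponds to 95% confidence;
    p = 0.5 is the most conservative assumed proportion."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite population
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite correction

# e.g. a target community of 2,500 households (illustrative figure)
print(cochran_sample_size(2500))   # -> 334 households

In practice, the required sample also depends on the sampling design (random, stratified, cluster) and expected non-response, so figures like these are a starting point rather than a rule.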

c. Plan for data analysis


Data analysis is the process of converting collected (raw) data into usable information. This is a
critical step of the monitoring planning process because it shapes the information that is reported
and its potential use. It is really a continuous process throughout the project/programme cycle to
make sense of gathered data to inform ongoing and future programming. Such analysis can occur
when data is initially collected, and certainly when data is explained in data reporting.
Data analysis involves looking for trends, clusters or other relationships between different types
of data, assessing performance against plans and targets, forming conclusions, anticipating
problems and identifying solutions and best practices for decision-making and organizational
learning. Reliable and timely analysis is essential for data credibility and utilization.
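
As a minimal illustration of this kind of analysis, the Python sketch below tracks a single output indicator against an annual target and computes cumulative progress; the indicator, figures and reporting periods are invented for illustration:

# Track an output indicator (households reached per quarter) against its
# annual target and compute cumulative progress. Illustrative figures only.
target = 400                                 # households to reach by year end
reached = {"Q1": 80, "Q2": 110, "Q3": 95}    # actuals collected so far

cumulative = 0
for quarter, count in reached.items():
    cumulative += count
    pct = 100 * cumulative / target
    print(f"{quarter}: +{count} households, cumulative {cumulative} "
          f"({pct:.0f}% of annual target)")

print(f"Remaining to reach the annual target: {target - cumulative} households")

Comparing such figures across periods is one simple way to spot the trends and emerging problems that the paragraph above describes.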

d. Plan for information reporting and utilization

Having defined the project/programme’s informational needs and how data will be collected,
managed and analysed, the next step is to plan how the data will be reported as information and
put to good use.

Reporting is the most visible part of the monitoring system, where collected and analysed data is
presented as information for key stakeholders to use. Reporting is a critical part of monitoring
because no matter how well data may be collected and analysed, if it is not well presented it
cannot be well used – which can be a considerable waste of valuable time, resources and
personnel. Sadly, there are numerous examples where valuable data has proved valueless
because it has been poorly reported on.

Identify the specific reporting needs/audience; reports should be prepared for a specific
purpose/audience. This informs the appropriate content, format and timing for the report. For
example, do users need information for ongoing project/programme implementation, strategic
planning, compliance with donor requirements, evaluation of impact and/or organizational
learning for future project/programmes?

Internal reporting is conducted to enable actual project/programme implementation; it plays a


more crucial role in lesson learning to facilitate decision-making – and, ultimately, what can be
extracted and reported externally. External reporting is conducted to inform stakeholders
outside the project/programme team and implementing organization; this is important for
accountability.

Day-to-day operations depend upon a regular and reliable flow of information. Therefore, special
attention should be given to the informational needs of the project/programme managers. They
will need timely information to analyse project/programme progress and critical issues, make
planning decisions and prepare progress reports for multiple audiences, e.g. superiors and
donors. In turn, project-level reports provide essential information for programme managers and
country directors to compare planned actions with actual performance and budget.

It is also critical to determine the reporting frequency and identify realistic reporting deadlines.
They should be feasible in relation to the time, resources and capacity necessary to produce and
distribute reports including data collection, analysis and feedback.

Once the reporting audience (who), purpose (why) and timing (when) have been identified, it is
then important to determine the key reporting formats that are most appropriate for the intended
user(s). This can vary from written documents to video presentations posted on the internet.
Sometimes the reporting format must adhere to strict requirements, while at other times there can
be more flexibility.

To encourage information utilization, information needs to be disseminated. Common dissemination methods include:
• Print materials distributed through mail or in person
• Internet communication, e.g. e-mail (and attachments), web sites, blogs
• Radio communication, including direct person-to-person radio (ham radio) as well as broadcast radio
• Telephone communication, including voice calls, text-messaging, as well as other functions enabled on a mobile phone
• Television and filmed presentations
• Live presentations, such as project/programme team meetings and public meetings

Decision-making and planning really form the heart of data utilization. But no matter how well
the information is prepared or disseminated, it will ultimately be up to the user to decide when
and how to put it to use.

e. Plan for monitoring human resources and capacity building


An effective monitoring system requires capable people to support it. It is also important to plan
for the people responsible for the monitoring processes, including data management, analysis,
reporting and training.

A first step in planning for monitoring human resources is to determine the available monitoring
experience within the project/programme team, partner organizations, target communities and
any other potential participants in the monitoring system. It is important to identify any gaps
between the project/programme’s monitoring needs and available personnel, which will inform
the need for capacity building or outside expertise.

Key questions to guide this process include:


• Is there existing monitoring expertise among the project/programme team? How does this match with the monitoring needs of the project/programme?
• Is there monitoring support from the organization implementing the project/programme? For instance, is there a technical unit or individuals assigned with monitoring responsibilities to advise and support staff, and if so, what is their availability for the specific project/programme?
• Do the target communities (or certain members) and other project/programme partners have any experience in monitoring?

Determine the extent of local participation, since data collection and analysis are undertaken with the very people to whom these processes and decisions most relate. This is an important
principle which prioritizes the involvement of local volunteers and communities to build local
capacity to sustain the project/programme.

Determine the extent of outside expertise; outside specialists are usually employed for technical
expertise, objectivity and credibility, to save time and/or as a donor requirement. Clearly, and
especially for external monitoring, experience, reliability and credibility are essential when
considering whether or not to use outside expertise.

Whether project/programme staff, volunteers, community members or other partners are involved in the monitoring system, it is important to develop tools and mechanisms to manage their time and performance.

Once roles and responsibilities have been determined, it is important to specify any monitoring
training requirements. For longer-term projects/programmes, or those with significant training
needs, it may be useful to create a monitoring training schedule (planning table), identifying key
training sessions, their schedule, location, participants and allocated budget.

Monitoring training can be formal or informal. Informal training includes on-the-job guidance
and feedback, such as mentorship in completing checklists, commenting on a report or guidance
on how to use data management tools.
Formal training can include courses and workshops on project/programme design (log frames),
planning, data collection, management, analysis and reporting, etc. Formal training should be
tailored towards the project/programme’s specific needs and audience. This can involve an
outside trainer coming to the project/programme team/site, sending participants to training/
workshops, online training or academic courses.

f. Prepare the monitoring budget


Itemize monitoring budget needs; if the monitoring planning has been approached
systematically, identifying key steps and people involved, detailing budget items should be
straightforward. Start by listing the monitoring tasks and associated costs.

Examples of budget items include:


Human resources - budget for staffing, including full-time staff, external consultants, capacity building/training and other related expenses, e.g. translation, data entry for baseline surveys, etc.
Capital expenses - budget for facility costs, office equipment and supplies, any travel and accommodation, computer hardware and software, printing, publishing and distributing M&E documents, etc.

Costs associated with regular project/programme monitoring should be included in the


project/programme budget, rather than as part of the organization’s overhead (organizational
development or administrative costs). Therefore, the true cost of a project/programme will be
reflected in the budget. Otherwise, including monitoring costs as an administrative or
organizational development cost may incorrectly suggest inefficiencies in the project/programme

and the implementing organization, with donors reluctant to cover such costs when in reality
they are project-related costs. Ideally, financial systems should allow for activity-based costing
where monitoring costs are linked to project/ programme activities being monitored.
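
The sketch below illustrates, with invented task names and amounts, how a monitoring budget might be itemized by task (in the spirit of the activity-based costing mentioned above) and expressed as a share of the overall project budget:

# Itemize a monitoring budget by task and express it as a share of the
# overall project budget. Task names and amounts are illustrative.
monitoring_tasks = [
    ("Baseline household survey (enumerators, data entry)", 4500),
    ("Quarterly field monitoring visits", 2400),
    ("Data management software and training", 1800),
    ("Reporting and dissemination", 900),
]
project_budget = 120000   # total project budget (illustrative)

monitoring_total = sum(cost for _, cost in monitoring_tasks)
for task, cost in monitoring_tasks:
    print(f"{task:<52} {cost:>7,}")
print(f"{'Total monitoring budget':<52} {monitoring_total:>7,}")
print(f"Share of project budget: {100 * monitoring_total / project_budget:.1f}%")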

3.5 Benefits of monitoring


The following are some of the benefits of program monitoring:
1. Analyze the current situation during program implementation.
2. Discover trends and patterns arising during project implementation.
3. Identify potential problems at an early stage and propose possible solutions.
4. Monitor the efficiency with which the different components of the project are being implemented and suggest improvements.
5. Make decisions about human, financial, and material resources.
6. Provide project management staff and other stakeholders with information on whether progress is being made towards achieving project objectives. In this regard, monitoring represents a continuous assessment of project implementation in relation to project plans, resources, infrastructure, and use of services by project beneficiaries.
7. Check on the conditions or situation of a target group and changes brought about by project activities. In this regard, monitoring assists project management to check whether the project continues to be relevant to the target group and/or geographical area, and whether project assumptions are still valid.
8. Enable managers and staff to identify potential project weaknesses, problems and shortcomings before it is too late. This provides managers with the opportunity to make timely adjustments and corrective actions to improve the project design, work plan and implementation strategies.

Deliverables from monitoring include


• Written status reports
• Updates to lists of action items, risks, problems, and issues
• Updates to the plan and schedule, to reflect actual progress
• Comparisons of actual costs to budgeted costs, as well as the cost/benefit analysis used when starting the project

3.6 Challenges encountered in monitoring

The following challenges may be encountered during program monitoring:
1. Lack of resources
2. Lack of skills and knowledge
3. Distraction from the bigger program goals and objectives
4. Resistance by the project development team
5. Lack of appropriate tools and techniques
6. Political nature of programs
7. Lack of clearly defined program objectives and goals
8. It requires a high level of dedication and commitment to succeed.
9. It requires time, which might be difficult to find and stick to.
10. Uncooperative program team members and other stakeholders.

3.7 Summary

The lecture has covered the concept of monitoring and the process of developing a
monitoring system. We have also looked at benefits and challenges of monitoring
programs. We shall look at program evaluation in our next lecture.

3.8 Activity

1. Identify any non-profit organization in your County. Find out how they track the progress of their program(s).
2. In your opinion, is the monitoring system identified above (Q1) effective? If not, what can they do to improve their monitoring practice?

3.9 Further Reading

• Read further on developing a monitoring system. Different organizations propose different methods of coming up with a monitoring system. In this lesson, we used the International Federation of Red Cross and Red Crescent Societies' method. Find out what processes are suggested by other development organizations.
3.10 Self Test Questions

• What does the term monitoring refer to?
• As a program manager for an organization working in the field of health, how can you ensure that activities are being implemented as scheduled?
• Identify and describe four deliverables from monitoring systems.

3.11 Glossary

Monitoring refers to a continuing function that aims primarily to provide the management and
main stakeholders of an ongoing intervention with early indications of progress, or lack thereof,
in the achievement of results.

3.12 Answers to the Self-Test Questions

1. What does the term monitoring refer to?

• It refers to the continued tracking of program activities and the regular collection of information to assist timely decision making, ensure accountability, and provide the basis for evaluation and learning. It aims at providing regular oversight of the implementation of an action in terms of input delivery, work schedules, targeted outputs, etc. Monitoring actions must be undertaken throughout the lifetime of the program.

2. As a program manager for an organization working in the field of health, how can you
ensure that activities are being implemented as scheduled?

• Have full knowledge of the health program, including the log frame, work plan and monitoring plan, in order to make comparisons between planned and actual activities.
• Ensure data is being collected on ongoing activities regularly.
• Ensure adequate management of collected data (storage, analysis, presentation).
• Provide channels for sharing collected data with program staff and management.
• Provide opportunities for using collected data for decision making.
• Provide required resources for planning, collecting, managing and sharing data.
• Allocate responsibilities concerning collecting, managing and sharing of data.
• Capacity-build program staff on program management: planning, design, monitoring and evaluation.

3. Identify and describe four deliverables from monitoring systems

• Activity progress reports
• Updates to lists of action items, risks, problems and issues
• Updates to work plans
• Updates to budget plans

References
1. Chikati, J. (2009). Monitoring and Evaluation Handbook. Nairobi: Regional Partnership for Resource Development Publication.
2. International Federation of Red Cross and Red Crescent Societies. (2011). Project/programme monitoring and evaluation (M&E) guide. Geneva: IFRC & RCS.
3. Jeffrey, C. (2012). Program Monitoring & Visualization: An Exploratory Approach. United States: Springer.
4. Levin, G. (2012). Program Management: A Life Cycle Approach. London: Taylor & Francis.
5. Pathfinder International. (2009). Program Design and Management. United States: Pathfinder.
6. Thiry, M. (2012). Program Management: Fundamentals of Project Management. England: Gower Publishing Limited.

LECTURE 4

PROGRAM EVALUATION

4.0 Introduction

The following lesson will focus on the topic of program evaluation. The lesson will also
highlight some benefits and challenges of evaluating programs.

4.1 Objectives

By the end of this topic you should be able to;

• Understand evaluation in the context of development programs
• Evaluate programs

4.2 Lecture Outline

• The concept of evaluation
• Evaluation process
• Benefits of evaluating programs
• Challenges of evaluating programs

4.3. The concept of evaluation


Evaluation is the systematic collection of information about a program that enables stakeholders
to better understand the program, improve its effectiveness, and/or make decisions about future
programming.
It can also be referred to as the systematic and objective assessment of ongoing or completed projects or programs in terms of their design, implementation and results.
Evaluation usually deals with strategic issues such as program/project relevance, effectiveness and efficiency, and results (expected and unexpected), in light of specified objectives, as well as program/project impact and sustainability.

A Brief History of Evaluation


Evaluation has not always been - and still is not always - viewed as a tool to help those involved
with a program to better understand and improve it.
Historically, evaluation focused on proving whether a program worked, rather than on improving
it to be more successful. This focus on proof has meant that “objective,” external evaluators
conducted the evaluations.

Research designs used rigorous scientific standards, using control or comparison groups to assess
causation. Evaluations occurred at the end of a project and focused only on whether the program
was a success or failure; it did not seek to learn what contributed to or hindered success.
Finally, this type of evaluation often disengaged program staff and others from the evaluation
process; these stakeholders rarely learned answers to their questions about a program and rarely
received information to help them improve the program.

Evaluation as an empowerment tool


Evaluation can be a form of empowerment. Participatory evaluation empowers an organization
to define its own success, to pose its own evaluation questions, and to involve stakeholders and
constituents in the process. Rather than being imposed from the outside, evaluation can help
program stakeholders identify what a program is expected to accomplish (and when), thereby
making sure everyone’s expectations for the program are aligned. By looking systematically at
what goes into a program, what the program is doing and producing, and what the program is
achieving, this evaluation approach enables program stakeholders both to be accountable for
results and to learn how to improve the program.
https://youtu.be/fE3dk95rtIk

Evaluation Principles
Evaluation is most effective when it:
• Links to program planning and delivery. Evaluation should inform planning and implementation. Evaluation shouldn't be done only if you have some extra time or only when you are required to do it. Rather, evaluation is a process integral to a program's effectiveness.
• Involves the participation of stakeholders. Those affected by the results of an evaluation have a right to be involved in the process. Participation will help them understand and inform the evaluation's purpose. Participation will also promote stakeholder contribution to, and acceptance of, the evaluation results. This increases the likely use of the evaluation results for program improvement.
• Supports an organization's capacity to learn and reflect. Evaluation is not an end in itself; it should be a part of an organization's core management processes, so it can contribute to ongoing learning.
• Respects the community served by the program. Evaluation needs to be respectful of constituents and judicious in what is asked of them. Evaluation should not be something that is "done to" program participants and others affected by or associated with the program. Rather, it should draw on their knowledge and experience to produce information that will help improve programs and better meet the needs of the community.
• Enables the collection of the most information with the least effort. You can't—and don't need to—evaluate everything! Focus on what you need to know. What are the critical pieces of information you and your stakeholders need to know to remain accountable and to improve your program?

Types of Evaluations (based on when it is conducted)

1. Ex-ante evaluation (formative)
It is the type of evaluation undertaken before the start of the project; it examines the feasibility of the project.

2. Concurrent/on-going evaluation
This is undertaken during project implementation; it analyses the relationship between the project's outputs and their effects, for the purpose of adapting the project to changes in the environment.

3. Ex-post evaluation (summative)
This is undertaken after the project has been implemented; it examines the effectiveness of the project in achieving its stated goals and the types of changes resulting from the project.

Types of evaluations (how it is conducted)

1. Self-evaluation
This involves an organisation or project holding up a mirror to itself and assessing how it is
doing, as a way of learning and improving practice. It takes a very self-reflective and honest
organisation to do this effectively, but it can be an important learning experience.

2. Participatory evaluation
This is a form of internal evaluation. The intention is to involve as many people with a direct
stake in the work as possible. This may mean project staff and beneficiaries working together on
the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an
evaluator.

3. Rapid Participatory Appraisal


Originally used in rural areas, the same methodology can, in fact, be applied in most
communities. This is a qualitative way of doing evaluations. It is semi-structured and carried out
by an interdisciplinary team over a short time. It is used as a starting point for understanding a
local situation and is a quick, cheap, useful way to gather information. It involves the use of
secondary data review, direct observation, semi-structured interviews, key informants, group
interviews, games, diagrams, maps and calendars. In an evaluation context, it allows one to get
valuable input from those who are supposed to be benefiting from the development work. It is
flexible and interactive.
4. External evaluation
This is an evaluation done by a carefully chosen outsider or outsider team.

5. Interactive evaluation
This involves a very active interaction between an outside evaluator or evaluation team and the
organisation or project being evaluated. Sometimes an insider may be included in the evaluation
team.

Internal and External Evaluations


Evaluations can be carried out by an internal or external group, or a combination of the two. The choice of who conducts an evaluation depends on the purpose of the evaluation.

Internal evaluations
These are conducted by the project, program, or organizational managers themselves. The advantages of internally led evaluations are that project staff generally feel less threatened, the evaluations are more likely to directly address pertinent management questions, and consequently managers are more likely to use the results. They are also less expensive.
Advantages
• The evaluators are very familiar with the work, the organizational culture and the aims and objectives.
• Sometimes people are more willing to speak to insiders than to outsiders.
• An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms.
• An internal evaluation will cost less than an external evaluation.
Disadvantages
• The evaluation team may have a vested interest in reaching positive conclusions about the work or organisation. For this reason, other stakeholders, such as donors, may prefer an external evaluation.
• The team may not be specifically skilled or trained in evaluation.
• The evaluation will take up a considerable amount of organisational time – while it may cost less than an external evaluation, the opportunity costs may be high.

External evaluation
These evaluations are conducted by a neutral outside party according to a defined scope of work.
Advantages are that they are less likely to be biased, they can tap into the expertise and
experience of the outside evaluators and the results may be viewed as being more credible by
third parties.

Advantages
• The evaluation is likely to be more objective as the evaluators will have some distance from the work.
• The evaluators should have a range of evaluation skills and experience.
• Sometimes people are more willing to speak to outsiders than to insiders.
• Using an outside evaluator gives greater credibility to findings, particularly positive findings.

Disadvantages
• Someone from outside the organisation or project may not understand the culture or even what the work is trying to achieve.
• Those directly involved may feel threatened by outsiders and be less likely to talk openly and cooperate in the process.
• External evaluation can be very costly.
• An external evaluator may misunderstand what you want from the evaluation and not give you what you need.

4.4 Evaluation process

4.4.1 Identify what to evaluate


Evaluation planning identifies and organizes questions you have about your program and plots a
route to get answers. Most questions that organizations probe through evaluation are in three
categories:
1. What did we do?
2. How well did we do it?
3. What difference did our program make? (What changes occurred because of our
program?)
The program’s logic model will form the foundation of your evaluation plan. As you look at your
logic model, you will find questions about your program that you hope to answer. The purpose of
evaluation planning is to identify these questions and plan a route to finding the answers.

Two major forms of evaluation help answer these questions.

1) Implementation evaluation: Are you performing the services or activities as planned? Are you reaching the intended target population? Are you reaching the intended number of participants? Is it leading to the products you expected? How do the participants perceive these services and activities? These questions are about implementation.
2) Outcomes evaluation: Is your target audience experiencing the changes in knowledge, attitudes, behaviours, or awareness that you sought? What are the results of your work? What is it accomplishing among your target audience? These questions are about outcomes.

An effective evaluation should answer both (implementation and outcome) types of questions.
You can do one without the other, but you will not learn as much as if you conduct both types.

NOTE

In the past, NGOs' evaluations focused on documenting and reporting on program activities. They assumed that if program activities were implemented as planned, desired results would occur for the individuals, families, organizations, or communities they served. The non-profit community focused on reporting on implementation to the exclusion of outcomes. In recent years, the pendulum has swung in the opposite direction: non-profits are under pressure to measure and report outcomes, with little emphasis on implementation.

Evaluating Implementation:
i. What Did You Do?
An implementation evaluation plan starts with the identification of program activities. The activities are
the actions that the program takes to achieve desired outcomes. If your program entails many
activities, you may have organized these activities into activity categories—closely related
groups of activities in your program.

Outputs are the tangible products of your program’s activities. Outputs are also the evidence of
your activities. In implementation evaluation, outputs are the items you will actually measure to
evaluate your activities. Measuring outputs answers the question: What Did We Do? This is
often the easiest and most direct process in evaluation.
ii. How Well You Did It
Documenting activities and associated outputs tells us what was done. However, that isn’t
sufficient for evaluation purposes (after all, you already had your activities and outputs identified
in your logic model). The purpose of implementation evaluation is to understand how well you
did it.

The next step is to identify other questions you have about your activities and their outputs.
What information will help you better understand the implementation of your program? The
following are examples of the types of questions you might consider:
• Participation: Did the targeted audience participate in the activities as expected? Why? Were some individuals over- or under-represented? Why?
• Quality: Were the services/materials you provided perceived as valuable by the intended audience? Were they appropriate? How did others in the field view their quality?
• Satisfaction: Did those affected by your program's services approve of them? Why? Who was most/least satisfied?
• Context: What other factors influenced your ability to implement your program as planned? What political, economic, or leadership issues intervened, changing the expected outcomes in your program?
Implementation evaluation offers important information about what was done and how well it was done. The lessons you learn can serve as benchmarks for progress against your original program plan. You may find that:
• you aren't conducting the activities as planned; or
• you are conducting those activities, but they are not leading to the products/outputs you intended; or
• they did lead to the intended outputs, but the quality or satisfaction levels are not what you had hoped.

This information can help determine if there is need to adjust current plans, change activities, or
reconsider your theoretical assumptions. Evaluating your implementation can provide a feedback
loop in the midst of your effort, before you may be able to evaluate outcomes.

Evaluating Outcomes: What Difference Did You Make?


These days it’s no longer acceptable for non-profits to assume that good intentions, good-faith
effort, or even exemplary program implementation will result in the desired outcomes for those
we serve. It is important to spend time developing a plan to measure the achievement of
outcomes.
In the logic model, desired outcomes were identified – these are changes you expect to see as a
result of your program. Outcomes are frequently expressed as changes in knowledge, skill,
attitudes, behaviour, motivation, decisions, policies, and conditions. They occur among
individuals, communities, organizations, or systems.

In order to evaluate how successfully you have achieved your outcomes, you will need to
determine indicators for your outcomes.
An indicator is the evidence or information that will tell you whether your program is achieving
its intended outcomes. Indicators are measurable and observable characteristics. They answer the
question: “How will we know that change occurred?”

We often state outcomes as abstract concepts or ambitions. Indicators are the measurement of
outcomes. They are specific characteristics or behaviours that provide tangible information about
those concepts or ambitions. Often, one outcome will have more than one indicator. When you
develop your indicators, it may be helpful to ask: “What does the outcome look like when it
occurs? How will I know if it has happened? What will I be able to see?”

An indicator should be:


• Meaningful: the indicator presents information that is important to key stakeholders of the program. Keep in mind that different people can have different perceptions about what defines "success" for a program. Reaching consensus among key stakeholders regarding what success looks like is essential to ensuring buy-in to your evaluation results.
• Direct: the indicator or combination of indicators captures enough of the essential components of the outcome to represent the outcome. Several indicators can be necessary to measure an outcome adequately. However, there is no standard for the number of indicators to use. While multiple indicators are often necessary, more than three or four may mean that the outcome is too complex and should be better defined. An indicator must also reflect the same type of change as the outcome. For example, if an outcome is about a change in attitude or opinion, the indicator should not reflect a behaviour change.
• Useful: the information provided by this indicator can be put to practical use for program improvement.
• Practical to collect: the data for the indicator shouldn't be a burden to collect. Consider whether you can collect data about your indicator in a timely manner and at reasonable cost. Sometimes an indicator meets the other criteria described above, but the effort to collect it would be too burdensome. Our evaluation template offers you an opportunity to note the level of effort involved.

Elements of a Strong Indicator Statement


To assist in evaluation, a strong indicator statement should include these four elements:

• How much: identify the amount of change among your target population that would indicate a successful level of achievement. This sets the target for your work; base this on an understanding of your baseline and a level of change that is reasonable for your program.
• Who: specify the target population you will measure.
• What: describe the condition, behaviour, or characteristic that you will measure.
• When: note the timeframe in which this change should occur.

The following are examples of outcomes and indicators.


Outcome: New mothers increase their knowledge of child development.
Indicator: 75% of new mothers in the program satisfactorily complete a short survey about child development at the end of the course.

Outcome: Target audiences increase knowledge about the signs of child abuse and neglect.
Indicator: 50% of community focus group members can identify the signs of child abuse and neglect six months after the education campaign ends.

Outcome: Residents feel the neighbourhood is a safer place for children.
Indicator: 60% of neighbourhood residents report in one year that they believe the neighbourhood is safer for children than it was one year before.

Outcome: Youth have increased knowledge about the consequences of long-term bhang use/abuse.
Indicator: At the end of the course, 90% of participants report that they gained knowledge about the risks/harms associated with bhang use, and 80% report that it is important not to use alcohol or other drugs.
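
To see how such an indicator might actually be measured, here is a minimal Python sketch based on the first outcome/indicator pair above; the completion records are invented for illustration:

# Measure the indicator "75% of new mothers satisfactorily complete a
# short survey about child development at the end of the course".
# The records below are illustrative, not real program data.
TARGET_PCT = 75
completions = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 1 = completed satisfactorily

achieved_pct = 100 * sum(completions) / len(completions)
status = "indicator met" if achieved_pct >= TARGET_PCT else "indicator not met"
print(f"Achieved: {achieved_pct:.0f}% (target {TARGET_PCT}%) -> {status}")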

4.4.2 Data Collection Methods


This answers the question what’s the best way to gather the information you need?
So far, you have identified what you want to evaluate and what you will measure. In
implementation evaluation, these are activities and their related outputs and additional questions.
In outcome evaluation, these are program outcomes and their related indicators.

Now you will consider methods to collect the data. Outputs, implementation questions, and
indicators are what you will measure; data collection methods are how you will measure these.

To choose the best data collection methods, consider:


• Which methods will be least disruptive to your program and to those you serve?
• Which methods can you afford and implement well?
• Which methods are best suited to obtain information from your sources (considering cultural appropriateness and other contextual issues)?

Common data collection strategies

1. Review documents
Analysis of printed material including program records, research reports, census data, health
records, budgets. Document review is a common method of collecting data about activities and
outputs for implementation evaluation.

2. Observe
Observe situations, behaviours and activities in a formalized and systematic way, usually using
observational checklists and trained observers. This is a good method to use in settings where
experiencing actual events or settings (rather than hearing about them) is an important part of the
evaluation.

3. Talk to people
Collect verbal responses from participants and other stakeholders through interviews (in-person
or phone) or focus groups. This method is helpful when it is important to hear complex or highly
individual thoughts of a certain group of individuals.

4. Collect written responses from people


Collect written responses through surveys (in-person, e-mail, online, mail, phone), tests, or journals/logs. Except in the case of journals, this method is often used when you need a lot of information from a large number of people or when it is important that identical information be available from all respondents (a short tallying sketch follows this list).

5. Other methods
Review pictorial/multi-media data in photographs, audiotapes, compact discs, visual artwork. Conduct expert or peer reviews in which professionals in the field with specific expertise assess a set of activities or products. Use a case study, an intensive investigation of one unit for learning purposes, often as an exemplar or as a model to be avoided.
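
As a small illustration of method 4, the sketch below tallies coded survey responses (an assumed 1-5 satisfaction scale with invented values) into the kind of summary figures an evaluation might report:

# Turn coded written survey responses into summary figures.
# Scale (assumed): 1 = very dissatisfied ... 5 = very satisfied.
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 5, 4]   # illustrative data

counts = Counter(responses)
satisfied = sum(1 for r in responses if r >= 4)
print("Response distribution:", dict(sorted(counts.items())))
print(f"Average score: {sum(responses) / len(responses):.2f}")
print(f"Satisfied (scored 4 or 5): {100 * satisfied / len(responses):.0f}%")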

NOTE

Consider the cost and time required to create new data collection tools. Also, consider
the cost involved in actually collecting and analysing the data. Some methods are more
expensive than others. For each data collection method you identify, consider whether
you already have a tool in place that you could use (such as an intake survey). If not,
think about the amount of effort required to create and use the new data collection tool.

4.5 Benefits of evaluating programs


Conducting a well-conceived and implemented evaluation will help you to:

i. Understand and improve your program


Even the best-run programs are not always complete successes. Every program can improve; the
information collected in an evaluation can provide guidance for program improvement. As you
incorporate evaluation into your ongoing work, you will gain useful information and become a
“learning organization” —one that is constantly gathering information, changing, and improving.

ii. Test the theory underlying your program


The systematic data you collect about your program’s short-, intermediate and long-term
achievements as well as its implementation helps you to understand whether (and under what
conditions) the hypotheses underlying your program are accurate, or whether they need to be
modified.

iii. Tell your program’s story

The data collected through evaluation can provide compelling information to help you describe
what your program is doing and achieving. Evaluation results provide a strong framework for
making your program’s case before stakeholders, funders, and policy-makers.

iv. Be accountable
Evaluation helps you demonstrate responsible stewardship of donor funds.

v. Inform the field


Non-profits that have evaluated and refined their programs can share credible results with the
broader non-profit community. A community that can share results can be more effective.

vi. Support fundraising efforts


A clear understanding of your program—what you did well, and precisely how you
accomplished your outcomes—helps you raise additional funds to continue your work and
expand or replicate your efforts.

4.6 Challenges of evaluating programs


The same challenges encountered during monitoring could also be encountered when conducting evaluation. They include:
1. It requires time, which might be difficult to find and stick to.
2. Lack of resources
3. Lack of skills and knowledge in evaluation
4. Distraction from the bigger program goals and objectives
5. Resistance by the project development team
6. Lack of appropriate tools and techniques
7. Political nature of programs
8. Lack of clearly defined program objectives, goals and other project indicators, such as milestones and deliverables, which hinders deciding what to evaluate.
9. It requires a high level of dedication and commitment to succeed.
10. It is difficult to identify realistic quantitative and qualitative indicators to be measured.
11. Uncooperative program team members and other stakeholders.

4.7 Summary

In this topic, we have discussed the meaning of evaluation, types of evaluation,


planning for evaluation, and the benefits and challenges of evaluating programs. We shall now move on to our next topic, which is Results-Based Monitoring and Evaluation systems.

4.8 Activity

1. Identify any non-profit organization in your County. Find out their evaluation approach.
2. In your opinion, is the system identified above (Q1) effective? If not, what can they do to improve their evaluation practice?

4.9 Further Reading

• Research more on how non-profit organizations nationally and internationally evaluate their programs. Do they prefer implementation or outcome evaluation? What other approaches do they use?

4.10 Self Test Questions

• Explain two factors constraining effective evaluation of programs.
• Illustrate the importance of having good indicators to measure program outcomes.
• Compare and contrast internal and external evaluations.
• List and explain any two types of evaluations.
• What principles does a program manager need to consider to ensure effective and efficient evaluations?

4.11 Glossary

1. Evaluation refers to the systematic and objective assessment of ongoing or completed projects or programs in terms of their design, implementation and results.
2. Indicator refers to the evidence or information that will tell you whether your program is achieving its intended outcomes.

4.12 Answers to the Self-Test Questions

1. Explain two factors constraining effective evaluation of programs

• Inadequate resources, including financial, human and time. These resource shortcomings may hinder proper planning and data collection, which interferes with the findings of the program. Lack of capacity to identify the correct evaluation and carry it out may also produce inaccurate information, which may misguide decision makers. Inadequate timelines for conducting evaluation may also hinder proper planning and collection of data, resulting in poor feedback.

• Lack of clearly defined program objectives, goals and indicators, such as milestones and deliverables, which hinders deciding what to evaluate. Evaluation questions arise from the designed program, most importantly the log frame. Therefore, if it has not been properly designed, it might create challenges for the evaluators; for example, evaluators may find it difficult to decide what to evaluate – what information will be needed to ascertain that the process and outcome of the program were successful – if the activities or outcomes are confusing or unclear.

2. Illustrate the importance of having good indicators to measure program outcomes

• An indicator presents information that is important to key stakeholders of the program.
• An indicator or combination of indicators captures enough of the essential components to represent the outcome.
• The information provided by an indicator can be put to practical use for program improvement.
• Good indicators facilitate easy collection of data in a timely manner and at reasonable cost.

3. Compare and contrast internal and external evaluations

In internal evaluations, the evaluation process is conducted by the project, program, or organizational managers themselves, while in external evaluations it is conducted by a neutral outside party according to a defined scope of work.

4. List and explain any two types of evaluations

Formative evaluation begins during program development and continues throughout the life of the project. Its intent is to assess ongoing activities and provide information for the evaluation and improvement of the program.

Summative evaluation assesses a mature program's success in reaching its stated goals. Summative evaluation (sometimes referred to as impact or outcome evaluation) frequently addresses many of the same questions as a progress evaluation, but it takes place after the program has been established and the timeframe posited for change has occurred.

5. What principles does a program manager need to consider in ensuring effective and
efficient evaluations?

• Evaluation involves the participation of stakeholders. Those affected by the results of an evaluation have a right to be involved in the process. Participation will help them understand and inform the evaluation's purpose. Participation will also promote stakeholder contribution to, and acceptance of, the evaluation results. This increases the likely use of the evaluation results for program improvement.
• Evaluation supports an organization's capacity to learn and reflect. Evaluation is not an end in itself; it should be a part of an organization's core management processes, so it can contribute to ongoing learning.

References

1. Chikati, J. (2009). Monitoring and Evaluation Handbook. Nairobi: Regional Partnership for Resource Development Publication.
2. International Federation of Red Cross and Red Crescent Societies. (2011). Project/programme monitoring and evaluation (M&E) guide. Geneva: IFRC & RCS.
3. Jeffrey, C. (2012). Program Monitoring & Visualization: An Exploratory Approach. United States: Springer.
4. Levin, G. (2012). Program Management: A Life Cycle Approach. London: Taylor & Francis.
5. Martens, D. and Wilson, A. (2012). Program Evaluation: Theory and Practice: A Comprehensive Guide. New York: Guilford Press.
6. Thyer, B. and Padgett, D. (2010). Program Evaluation: An Introduction, 5th ed. USA: Wadsworth.

LECTURE 5

RESULT BASED MONITORING AND EVALUATION

5.0 Introduction

Welcome to this lesson; we shall focus on Results-Based Monitoring and Evaluation. In the previous two lectures, we explored traditional monitoring and evaluation, which focuses more on inputs and outputs. In results-based M&E, we emphasize the outcomes and impact of programs and how such information can be used by high-level decision makers.

5.1 Objectives

By the end of this topic you should be able to;


• Understand the differences between results-based M&E and traditional monitoring and evaluation
• Conduct results-based monitoring and evaluation

5.2 Lecture Outline

• Concept of results-based monitoring and evaluation (M&E)
• Application of results-based monitoring and evaluation
• Building a results-based M&E system

5.3. Concept of Result based Monitoring and Evaluation

Results-based monitoring and evaluation (M&E) is a powerful public management tool that can
be used to help policymakers and decisionmakers track progress and demonstrate the impact of a
given project, program, or policy. Results-based M&E differs from traditional implementation-
focused M&E in that it moves beyond an emphasis on inputs and outputs to a greater focus on
outcomes and impacts.

Building and sustaining results-based M&E systems is not easy. It requires continuous commitment, time, effort, and resources – and champions – but it is doable. Once the system is built, the challenge is to sustain it. There are many political, organizational, and technical challenges to overcome in building these systems – both for developed and developing countries. Building and sustaining such systems is primarily a political process and, less so, a technical one. There is no one correct way to build such systems, and many countries and organizations will be at different stages of development with respect to good public management practices in general, and M&E in particular. It is important to recognize that results-based M&E systems are a continuous work in progress.

Monitoring and Evaluation: What Is It All About?

Credible answers to the "so what" question address the accountability concerns of stakeholders, give public sector managers information on progress toward achieving stated targets and goals, and provide substantial evidence as the basis for any necessary mid-course corrections in policies, programs, or projects.

Building an M&E system essentially adds that fourth leg to the governance chair. What has typically been missing from government systems is the feedback component with respect to the outcomes and consequences of governmental actions. This is why building an M&E system gives decisionmakers an additional public sector management tool.

Monitoring is a continuous function that uses the systematic collection of data on specified
indicators to provide management and the main stakeholders of an ongoing development
intervention with indications of the extent of progress, the achievement of objectives, and
progress in the use of allocated funds.

Evaluation is the systematic and objective assessment of an ongoing or completed project,
program, or policy, including its design, implementation, and results. The aim is to determine
the relevance and fulfillment of objectives, development efficiency, effectiveness, impact, and
sustainability. An evaluation should provide information that is credible and useful, enabling the
incorporation of lessons learned into the decision-making process of both recipients and donors.

In juxtaposing these two definitions, it is immediately evident that they are distinct yet
complementary. Monitoring gives information on where a policy, program, or project is at any
given time (and over time) relative to respective targets and outcomes. It is descriptive in intent.
Evaluation gives evidence of why targets and outcomes are or are not being achieved. It seeks to
address issues of causality. Of particular emphasis here is the expansion of the traditional M&E
function to focus explicitly on outcomes and impacts.

Evaluation is a complement to monitoring in that when a monitoring system sends signals that
the efforts are going off track (for example, that the target population is not making use of the
services, that costs are accelerating, that there is real resistance to adopting an innovation, and so
forth), good evaluative information can help clarify the realities and trends noted with the
monitoring system. For example, "If annual performance information is presented by itself (in
isolation), without the context and benefit of program evaluation, there is a danger of program
managers, legislators…and others drawing incorrect conclusions regarding the cause of
improvements or declines in certain measures…simply looking at trend data usually cannot tell
us how effective our government programs' interventions were." We stress the need for good
evaluative information throughout the lifecycle of an initiative, not just at the end, to try to
determine causality.

Monitoring can be done at the project, program, or policy level. For example, in looking at
infant health, one could monitor at the project level by monitoring awareness of good prenatal
care in six targeted villages. At the program level, one could monitor to ensure that information
on prenatal care is being targeted to pregnant women in a whole region of the country. At the
policy level, the concern could be to monitor the overall infant morbidity and mortality rates
for that same region.

Evaluation, like monitoring, may be conducted at the project, program, or policy level. To take
an example of privatizing water systems, a project evaluation might involve the assessment of
the improvement in water fee collection rates in two provinces. At the program level, one might
consider assessing the fiscal management of the government’s systems, while at the policy level,
one might evaluate different model approaches to privatizing public water supplies.

When we refer to evaluation in the context of an M&E system, we are not referring solely to the
classical approach of determining attribution as embodied in the after-the-fact assessment of
projects, programs, or policies. Impact evaluations do (or at least try to) address attribution. But
we are viewing evaluation in a much broader context, as a continuously available mode of analysis
that helps program managers gain a better understanding of all aspects of their work, from design
through implementation to completion and subsequent consequences.

Evaluation has also been used for different purposes over the years. In the OECD countries, for
example, early evaluation in the 1960s and 1970s studied ways of improving social programs.
Later, in the 1980s and 1990s, governments used evaluation for budgetary management, for
example, by examining ways to reduce expenditures and cut public programs. As noted earlier,
efforts to develop M&E systems have spread to developing countries, many driven by the desire
to meet specific donor requirements, international development goals, or, in some cases, both
external and internal social and economic pressures.

Again, evaluation can be defined as an assessment, as systematic and objective as possible, of a
planned, ongoing, or completed intervention. The aim is to determine the relevance of
objectives, efficiency, effectiveness, impact, and sustainability so as to incorporate lessons
learned into the decision-making process. Specifically, this kind of evaluation addresses "why"
questions, that is, what caused the changes being monitored; "how" questions, or what was the
sequence or process that led to successful (or unsuccessful) outcomes; and "compliance and
accountability" questions, that is, did the promised activities actually take place, and as planned?

5.4 Applications of Results-Based Monitoring and Evaluation

There are many and growing applications of results-based M&E. As the needs for
accountability and demonstrable results have grown, so have the uses and applications of
results-based M&E systems.

Project, Program, and Policy Applications

Results-based M&E systems have been successfully designed and used to monitor and evaluate
at all levels: project, program, and policy. Information and data can be collected and analyzed at
any and all levels to provide feedback at many points in time. In this way, the information can be
used to better inform key decision makers, the general public, and other stakeholders.

Monitoring and evaluation can and should be evident throughout the lifecycle of a project,
program, or policy, as well as after completion. M&E, with its continuing streams of data and
feedback, adds value at every stage, from design through implementation and impact. "The
specific information will also be different at each level, the complexity of collecting data will be
different, the political sensitivity on collecting the data may change, and the uses of the
information may change from one level to another."

Internal and External Applications


M&E can also be conducted at the local, regional, and national levels of government. So whether
one thinks of M&E in relation to levels of administrative complexity (project to program to
policy) or geographically, the applications are evident, though they need not be identical. Again,
the specific indicators may necessarily be different for each level of government.

It should also be noted that a functioning M&E system provides a continuous flow of information
that is useful both internally and externally. The internal uses come into play as information
from the M&E system is used as a crucial management tool for the public sector manager in
achieving results and meeting specific targets. Information on progress, problems, and performance
is key to a public manager striving to achieve results. Likewise, information from an M&E system is
important to those outside the public sector who are experiencing results, want to see
demonstrable impacts from government actions (and tax monies), and hope to build trust in a
government that is striving to better the lives of its citizens.

Fundamentally, an M&E system aids in thinking about and clarifying goals and objectives.
Governments and stakeholders can also use M&E systems for formulating and justifying
budgetary requests. In contrast to the earlier implementation-based approach, results-based M&E
focuses attention on achieving outcomes important to the organization and its internal and
external stakeholders.

M&E systems can help identify potentially promising programs or practices. They can also
identify unintended, but perhaps useful, project, program, and policy results. Conversely, M&E
systems can help managers identify program weaknesses and take action to correct them. An M&E
strategy can be used to diminish fear within organizations and governments, and can instead
help instill an open atmosphere in which people can learn from mistakes, make
improvements, and create knowledge along the way.

Knowledge Capital
Good M&E systems are also a source of knowledge capital. They enable governments and
organizations to develop a knowledge base of the types of projects, programs, and policies that are
successful and, more generally, of what works, what does not, and why. M&E systems can also
provide continuous feedback in the management process of monitoring and evaluating progress
toward a given goal. In this context, they promote organizational learning. Broad public access
to information derived from results-based M&E systems is also important in aiding economic
development both within and between countries. "Access to information is an essential
component of a successful development strategy. If we are serious about reducing global
poverty, we must liberate the access to information and improve its quality."

Transparency and Accountability

M&E systems can also aid in promoting greater transparency and accountability within
organizations and governments. Beneficial spillover effects may also occur from shining a light
on results. External and internal stakeholders will have a clearer sense of the status of projects,
programs, and policies. The ability to demonstrate positive results can also help garner greater
political and popular support.

5.5 Building a Results-Based M&E System
Although experts vary on the specific sequence of steps in building a results-based M&E system,
all agree on the overall intent. Different experts propose, for example, four- or seven-step models.

Regardless of the number of steps, the essential actions involved in building an M&E system
are to:
• Formulate outcomes and goals
• Select outcome indicators to monitor
• Gather baseline information on current conditions
• Set specific targets to reach and dates for reaching them
• Regularly collect data to assess whether the targets are being met
• Analyze and report the results.
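
To make these actions concrete, here is a minimal sketch in Python of how a single outcome indicator might be tracked against its baseline and target. The outcome, indicator, figures, and years are hypothetical illustrations, not part of any prescribed M&E toolkit.

# Minimal sketch: one outcome indicator tracked against its baseline and
# target. The outcome, indicator, figures, and years are hypothetical.
indicator = {
    "outcome": "Improved maternal health in District X",
    "name": "Percentage of births attended by skilled health personnel",
    "baseline": 45.0,   # measured before the intervention
    "target": 60.0,     # agreed level for the end of the target period
    "observations": {2013: 45.0, 2014: 51.0, 2015: 57.5},
}

def progress_to_target(record):
    # Share of the baseline-to-target gap closed by the latest observation.
    latest_year = max(record["observations"])
    latest = record["observations"][latest_year]
    gap = record["target"] - record["baseline"]
    return (latest - record["baseline"]) / gap

print(f"{progress_to_target(indicator):.0%} of the gap to target closed")
# Prints: 83% of the gap to target closed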

Given the agreement on what a good system should contain, why are these systems not part of
the normal business practices of government agencies, stakeholders, lenders, and borrowers?
One evident reason is that those designing M&E systems often miss the complexities and
subtleties of the country, government, or sector context. Moreover, the needs of end users are
often only vaguely understood by those ready to start the M&E building process. Too little
emphasis is placed on organizational, political, and cultural factors.

In this context, the 10-step model presented here differs from others because it provides
extensive details on how to build, maintain, and, perhaps most important, sustain a results-based
M&E system. It also differs from other approaches in that it contains a unique readiness
assessment. Such an assessment must be conducted before the actual establishment of a system.
The readiness assessment is, in essence, the foundation of the M&E system. Just as a building
must begin with a foundation, constructing an M&E system must begin with the foundation of a
readiness assessment. Without this foundation, moving forward may be fraught with difficulties
and, ultimately, failure.

Where to Begin: Whole-Of-Government, Enclave, or Mixed Approach

Governments around the world differ in their approaches to adopting results-based M&E
systems. There are essentially three approaches.
The first is the whole-of-government approach that was adopted in some of the early M&E pioneer
countries. It involves a broad, comprehensive establishment of M&E across the government.

The second approach is a more limited, enclave-focused one. Many countries, especially
developing countries, may not yet be in a position to adopt such sweeping change in a
comprehensive fashion. Other, more targeted approaches are available, such as beginning at
the local, state, or regional government level, or piloting M&E systems in a few ministries or
agencies.

Yet a third strategy for applying a results-oriented program is to focus on a particular customer
group. For example, the government of Egypt wanted to improve its programs and services to
advance women's issues. Each line ministry was expected to identify its current programs related
to gender issues and to assess their performance. In addition, the National Council
of Women, a recently established government organization aimed at improving government
support to women, was to identify a set of key performance indicators that the government could
then track and monitor to achieve the established gender-related goals. It is the responsibility of
the related ministries to track and monitor indicators for programs within their ministerial
control, and to closely monitor and evaluate related government programs to achieve results.

Steps in Building a Results-Based Monitoring and Evaluation System

STEP 1: CONDUCTING A READINESS ASSESSMENT

This step provides an analytical framework for assessing a given country's organizational
capacity and political willingness to monitor and evaluate its goals, and to develop a
performance-based framework.

Why Do a Readiness Assessment?
Experts have devised a number of different models for building M&E systems, but these often miss the
complexities and nuances of the wider country context. The needs of the recipient country are
often only vaguely understood by the experts trying to provide technical assistance. For all the
good intentions to advance the design, creation, and use of results-based M&E systems, too little
emphasis is placed on existing political, organizational, and cultural factors and contexts.

Most of the existing models start by jumping straight into building a results-based M&E
system, without even knowing where a given country stands in relation to a number of critical
factors, including organizational roles, responsibilities, and capabilities; incentives and demand
for such a system; the ability of an organization to sustain the system; and so forth. Few
models pose key readiness questions.

Most experts look at the "what" questions (What are the goals? What are the indicators?) and not
the "why" questions: Why do we want to measure something? Why is there a need in a
particular country to think about these issues? Why do we want to embark on building
sustainable results-based M&E systems?

To answer these "why" questions, there is a considerable amount of preparatory work to do
before the actual construction of a results-based M&E system. That preparatory work takes the
form of the readiness assessment presented here. We will walk through, step by step, some of
the important issues, concerns, and questions that should be addressed before embarking on
building an M&E system.

Some might also pose the question: How does a readiness assessment differ from a needs
assessment? Are they not the same thing? In fact, they are not. A needs assessment addresses the
fundamental, underlying question of whether governments need such systems.
A readiness assessment assumes that governments need to have these systems, and addresses
whether governments are actually ready and able to move forward in building, using, and
sustaining them.

A readiness assessment provides the analytical framework for rating a country's ability to
monitor and evaluate its progress in achieving designated development goals. It does this by
assessing a country's current understanding, capacity, and use of existing monitoring and
evaluation systems.

Key Areas Covered by a Readiness Assessment

The readiness assessment is a diagnostic tool that can be used to determine whether the
prerequisites are in place for building a results-based M&E system. It provides a guide
through eight areas that must be considered and explored in determining a given
country's or organization's ability and willingness to adopt and move forward with results-based
M&E systems.

They include:
1. What Potential Pressures Are Encouraging the Need for an M&E System within the Public
Sector, and Why?
It is important to know where the demand for creating an M&E system is emanating from, and
why. Are the demands and pressures coming from internal, multilateral, or international
stakeholders, or some combination of these? These requests will need to be acknowledged and
addressed if the response is to be appropriate to the demand.

As noted earlier, internal demands may arise from calls for reforms in public sector governance
and for better accountability and transparency. Anti-corruption campaigns may be a motivating
force. Or political opponents may not trust the government’s intentions or actions.

Externally, pressures may arise from the donor community for tangible development results from
their investments. International organizations, such as the European Union, expect a feedback
system on public sector performance via M&E for each of the accession countries. The
competitive pressures of globalization may also come into play: the rule of law, a strong
governance system, and clearly articulated rules of the game are now necessary to attract foreign
investment. Financial capital and the private sector look for a stable, transparent
investment climate, and for protection of their property and patents, before committing to invest in a
country. There is a multitude of pressures that governments may need to respond to, and these
will drive the incentives for building a results-based system.

2. Who Is the Advocate for an M&E System?

Champions in government are critical to the sustainability and success of a results-based M&E
system. A highly placed government champion can be a strong advocate for more well-informed
decision making and can help defuse and isolate attacks from counter-reformers who have
vested interests in averting the construction of such a system.

Within a given organization there are individuals or groups who will likely welcome and
champion such an initiative, while others may oppose or actively counter the initiative. It is
important to know who the champions are and where they are located in a government. Their
support and advocacy will be crucial to the potential success and sustainability of the M&E
system.

However, if the emerging champion is located away from the centre of policy making and has
little influence with key decision makers, it will be difficult, although not impossible, to
envision the M&E system being used and trusted. It will be hard to ensure the viability of the
system under these circumstances. Viability depends on the information being viewed as
relevant, trustworthy, usable, and timely. M&E systems with marginally placed champions who
are peripheral to the decision-making process will have a more difficult time meeting this
viability requirement.

3. What Is Motivating a Champion to Support Such an Effort?

Constructing a results-based M&E system is an inherently political act entailing both political risks
and benefits. On the risk side, producing information on government performance and
strengthening accountability are not politically neutral activities. On the benefit side, champions
may find rewards and recognition at the institutional and individual levels. Champions may be
motivated by a sense of public responsibility. They may also find favour with parliaments,
public and private stakeholders, civil society, and the international donor community by
delivering on promises, being perceived as reformers (a source of political capital), and
demonstrating accountability and results.

4. Who Will Own the System? Who Will Benefit from the System? How Much Information Do
They Really Want?

Politics is not the only factor often overlooked in building M&E systems. Frequently, a careful
institutional assessment is not made, in particular one that would reflect the real capacity of the
users to actually create, use, and sustain the system.

A carefully done readiness assessment helps provide a good understanding of how to design the
system to be responsive to the information needs of its users, determine the resources available to
build and sustain the system, and assess the capacities of those who will both produce and use
the information. Understanding these issues helps to tailor the system to the right level of
complexity and completeness.

5. How Will the System Directly Support Better Resource Allocation and the Achievement of
Program Goals?

Monitoring and evaluation is not an end unto itself. It is a tool to be used to promote good
governance, modern management practices, innovation and reform, and better accountability.
When used properly, these systems can produce information that is trustworthy, transparent, and
relevant. M&E systems can help policymakers track and improve the outcomes and impacts of
resource allocation. Most of all, they help governments and organizations make better-informed
decisions and policies by providing continuous feedback on results.

Experience shows that the creation of a results-based system often works best when linked with
other public sector reform programs and initiatives, such as creating a medium-term public
expenditure framework, restructuring public administration, or constructing a national poverty
reduction strategy. Linking the creation of M&E systems to such initiatives creates
interdependencies and reinforcements that are crucial to the overall sustainability of the systems.
The readiness assessment can provide a road map for determining whether such links are
structurally and politically possible.

6. How Will the Organization, the Champions, and the Staff React to Negative Information
Generated by the M&E System?
It is difficult to have a functioning M&E system in an organizational or political climate
characterized by fear. M&E systems will inevitably (even if infrequently) produce data that may be
embarrassing, politically sensitive, or detrimental to those in power. In a similar way, the
information can also be detrimental to units and individuals in an organization. ("Punishing the
messenger" is not an unknown occurrence in organizations.)

If it is clear from the readiness assessment that only politically popular or "correct" information
will be allowed to emanate from the M&E system, the system is vulnerable and compromised
from the beginning. It will not be seen as credible by those outside the organization. It will
come to be seen as a hollow exercise. In such a political setting, it is important to build the
system carefully and slowly. Finding units that will risk producing potentially detrimental
information, including unfavourable information about their own performance, is perhaps the best
that can be achieved. If such units are not present, there is little rationale or justification for
proceeding further to design such a system. An emphasis on traditional implementation
monitoring will have to suffice.

Governments willing to use information to make policy generally have achieved some level of
democracy and openness. But even in these countries, there is often a reluctance to measure and
monitor because of fears that the process will bring bad news to leadership and stakeholders
alike. There are real political limitations to be recognized in building such systems.

Not all barriers can be addressed simultaneously in the design of the system. However, failing to
recognize the presence of these barriers and to address them as soon as possible risks a level of
resistance greater, and longer lasting, than may have been necessary. It is a strategic
decision how much time and energy should be spent on removing barriers as opposed to
using that same finite time and energy to strengthen champions and support emerging
opportunities. We strongly lean toward the latter.

7. Where Does Capacity Exist to Support a Results-Based M&E System?

Performance data and information can be found in many places. The readiness assessment
provides a useful guide to determining where such information and data can be found. For
instance, are there any organizational units within the government that already have monitoring
and evaluation capacity and can undertake evaluations? What data systems can be found within,
or are available to, the central and/or line ministries of the government responsible for planning?
These can include budget data, output data, outcome or impact data, performance audits, financial
audits, project and program completion reports, and donor data and information. Outside the
government, NGOs, universities, research institutes, and training centers may also provide part
of the necessary technical capacity to support a results-based M&E system.

8. How Will the M&E System Link Project, Program, Sector, and National Goals?
One of the main functions of the readiness assessment is to determine the opportunities for, and
risks of, linking information across the government in an aligned fashion. In an ideal situation,
project-level performance data would be fed into and linked to program assessments that, in turn,
would be linked to sectoral, regional, and national goals and targets. In other words, staff at each
level would have a clear "line of sight" into, or understanding of, each of the other levels and how
they relate to one another.

Results-based M&E at the project level that is not clearly aligned with program goals is not
useful beyond the restricted information for a given project. Information must flow freely
between levels to be truly useful. Each level must help inform the next level to achieve the
desired results. It is important, as well, to ensure that within a level there is a commitment to
horizontally use and share information from the collection and analysis of data. The goal is to
create an M&E system that is transparent and aligned from one level to the next. Information
should flow up and down in a governmental system, rather than being collected, stored, and
used at one level but never shared across levels. A free flow of information can help ensure that
policies, programs, and projects are linked and coordinated. Ultimately, the real question is
whether the system can address the need at every level to be both producers and consumers of
results-based information.

STEP 2: AGREEING ON OUTCOMES TO MONITOR AND EVALUATE

Setting goals is part of the governmental decision-making process at every level. All
governments have goals, although not all have M&E capacity. Assuming that a country or
organization is in fact in a position to move forward in building a results-based M&E system,
the next step is to choose and agree on the outcomes (derived from the goals) to monitor and
evaluate; knowing where you are going before you get moving is key.

The Importance of Outcomes


At the outset, it is important to distinguish between goals and outcomes. Goals are generally
long term, such as the MDGs reviewed earlier. From goals we move to outcomes,
which, in the MDG example, are of an intermediate time frame (five to ten years). From outcomes
we derive targets, which are generally short range; in the MDG context, about one to three years.
Why is it important to emphasize outcomes at this stage? Why not move directly to setting
indicators? Because the establishment of outcomes will illustrate what success looks like. By
contrast, indicators are relevant only when they measure progress against an objective. Thus,
measuring indicators will show the progress made toward reaching the intended objectives.

Decision makers and stakeholders are positioned to make the intended outcomes of
governmental action as explicit as possible. One cannot set indicators before determining
outcomes, because it is the outcomes, not the indicators, that will ultimately produce the
benefits. Outcomes will demonstrate whether success has been achieved. In short, outcomes
will show which road to take.
Setting outcomes is essential in building a results-based M&E system. Building the system is
basically a deductive process in which inputs, activities, and outputs are derived from, and flow
from, the setting of outcomes. Indicators, baselines, and targets (covered in subsequent steps),
all crucial elements of the performance framework, are derived from and based on the setting of
outcomes.

The Overall Process of Setting and Agreeing Upon Outcomes


You need to know where you are going, why you are going there, and how you will know when
you get there. There is a political process involved in setting and agreeing upon desired
outcomes. Each part of that process is crucial to the success of achieving stakeholder consensus
with respect to outcomes.

i. Identify Specific Stakeholder Representatives: Who are the key parties involved around
an issue (health, education, and so forth)? How are they categorized, for example,
NGOs, government, donors? Whose interests and views are to be given priority?
ii. Identify Major Concerns of Stakeholder Groups: Use information-gathering techniques
such as brainstorming, focus groups, surveys, and interviews to discover the interests of
the involved groups. Numerous voices must be heard, not just the loudest, richest, or
most well connected. People must be brought into the process to enhance and support a
democratic public sector.

iii. Translate Problems into Statements of Possible Outcome Improvements: Formulating
problems as positive outcomes is quite different from a simple reiteration of the problem.
An outcome-oriented statement enables one to identify the road and destination ahead.
We encourage outcomes to be framed positively rather than negatively. Stakeholders
will respond and rally better to positive statements, for example, "We want improved
health for infants and children," rather than "We want fewer infants and children to
become ill." Positive statements to which stakeholders can aspire seem to carry more
legitimacy. It is easier to gather a political consensus by speaking positively to the
desired outcomes of stakeholders.
iv. Disaggregate to Capture Key Desired Outcomes: Outcomes should be disaggregated
sufficiently to capture only one improvement area in each outcome statement.
Simplifying and distilling outcomes at this point also eliminates complications later,
when we start to build a system of indicators, baselines, and targets by which to monitor
and evaluate. By disaggregating outcomes into subcomponents, we can set indicators to
measure results (a short sketch after this list illustrates steps iii and iv).
v. Develop a Plan to Assess How a Government or Organization Will Achieve These
Outcomes: When one monitors using the traditional implementation-based tools of inputs,
activities, and outputs, the need to be clear about outcomes is much less apparent.
Managers would gather inputs, assign activities, and wait for outputs. The
shortcoming of this approach is that completing all of the activities and outputs is not the
same thing as achieving the desired outcomes. The sum of all activities may or may not
mean that desired outcomes have resulted. Even if all activities were completed within a
given time frame, the desired outcome has not necessarily been achieved.
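
As a minimal illustration of steps iii and iv, the sketch below (in Python, with hypothetical statements) shows a negatively framed concern recast as a positive outcome and then disaggregated so that each sub-outcome captures a single improvement area ready for its own indicators.

# Sketch: from a negatively framed concern to a positive, disaggregated
# outcome. All statements are hypothetical illustrations.
concern = "Too many infants and children become ill"

outcome = "Improved health for infants and children"  # positive framing

sub_outcomes = [
    "Increased use of prenatal care by pregnant women",
    "Improved immunization coverage among children under five",
    "Reduced incidence of diarrhoeal disease among children under five",
]

# Each sub-outcome can now be given its own indicators, baseline, and targets.
for statement in sub_outcomes:
    print("-", statement)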

STEP 3: SELECTING KEY PERFORMANCE INDICATORS TO MONITOR OUTCOMES

How will we know when we have achieved our desired outcomes? After examining the
importance of setting achievable and well-defined outcomes, and the issues and process involved
in agreeing upon those outcomes, we turn next to the selection of key indicators.

Outcome indicators are not the same as outcomes. Indicators are quantitative or qualitative
variables that provide a simple and reliable means to measure achievement, to reflect the changes
connected to an intervention, or to help assess the performance of an organization against a
stated outcome. Indicators should be developed for all levels of the results-based M&E system,
meaning that indicators are needed to monitor progress with respect to inputs, activities, outputs,
outcomes, and goals. Progress needs to be monitored at all levels of the system to provide
feedback on areas of success and areas in which improvement may be required.

Outcome indicators help to answer two fundamental questions: "How will we know success or
achievement when we see it?" and "Are we moving toward achieving our desired outcome?" These
are the questions increasingly being asked of governments and organizations across the globe.
Developing key indicators to monitor outcomes enables managers to assess the degree to which
intended or promised outcomes are being achieved. Indicator development is a core activity in
building a results-based M&E system; it drives all subsequent data collection, analysis, and
reporting. There are also important political and methodological considerations involved in
creating good, effective indicators.

Translating Outcomes into Outcome Indicators

When we consider measuring "results," we mean measuring outcomes, rather than only inputs
and outputs. To do so, we must translate those outcomes into a set of measurable performance
indicators. It is through the regular measurement of key performance indicators that we can
determine whether outcomes are being achieved.

For example, in the case of the outcome “to improve student learning,” an outcome indicator
regarding students might be the change in student scores on school achievement tests. If students
are continually improving scores on achievement tests, it is assumed that their overall learning
outcomes have also improved. Another example is the outcome “reduce at-risk behaviour of
those at high risk of contracting HIV/AIDS.” Several direct indicators might be the measurement
of different risky behaviours for those individuals most at risk.
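
To illustrate the first example, here is a minimal sketch of how such an indicator might be computed, assuming hypothetical test scores; the change in the mean score is measured against the baseline year.

# Sketch: the indicator "change in student scores on achievement tests",
# computed against the baseline year. All scores are hypothetical.
scores_by_year = {
    2013: [52, 48, 61, 55],  # baseline year
    2014: [58, 54, 63, 60],
}

def mean(values):
    return sum(values) / len(values)

baseline_mean = mean(scores_by_year[2013])
latest_mean = mean(scores_by_year[2014])
change = (latest_mean - baseline_mean) / baseline_mean
print(f"Mean score moved from {baseline_mean:.1f} to {latest_mean:.1f} "
      f"({change:+.1%})")
# Prints: Mean score moved from 54.0 to 58.8 (+8.8%)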

STEP 4: SETTING BASELINES AND GATHERING DATA ON INDICATORS
After working through the process of selecting key performance indicators to monitor outcomes,
we turn to the establishment of baseline data, that is, establishing where we are at present. It is
difficult to project performance into the future without first establishing a baseline. The baseline is
the first measurement of an indicator. It sets the current condition against which future change can
be tracked. For instance, it helps to inform decision makers about current circumstances before
embarking on projecting targets for a given program, policy, or project. In this way, the baseline
is used to learn about current or recent levels and patterns of performance. Importantly, baselines
provide the evidence by which decision makers are able to measure subsequent policy, program,
or project performance.
Building Baseline Information
There are eight key questions that should be asked in building baseline information for every
indicator. (These questions continue to apply in subsequent efforts to measure the indicator.)

1. What are the sources of data?
2. What are the data collection methods?
3. Who will collect the data?
4. How often will the data be collected?
5. What is the cost and difficulty of collecting the data?
6. Who will analyze the data?
7. Who will report the data?
8. Who will use the data?
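
One way to keep these questions in view is to record the answers as a simple data-collection plan for each indicator. The sketch below, in Python, uses hypothetical answers purely for illustration.

# Sketch: the eight baseline questions recorded as a data-collection plan
# for one indicator. All entries are hypothetical.
baseline_plan = {
    "indicator": "Percentage of households with access to safe water",
    "data_sources": ["District water office records", "Household survey"],
    "collection_method": "Annual household sample survey",
    "collected_by": "District M&E officer",
    "frequency": "Yearly",
    "cost_and_difficulty": "Moderate; requires trained enumerators",
    "analyzed_by": "Ministry planning unit",
    "reported_by": "Ministry planning unit",
    "used_by": ["Ministry management", "Donors", "Community committees"],
}

for item, answer in baseline_plan.items():
    print(f"{item}: {answer}")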

STEP 5: PLANNING FOR IMPROVEMENT – SELECTING RESULTS TARGETS

After gathering baseline data on indicators, the next step is to establish results targets, that is,
what can be achieved in a specific time toward reaching the outcome. Identifying the expected
and desired level of project, program, or policy results requires the selection of specific
performance targets. Target setting is the final step in building the performance framework. It,
in turn, is based on outcomes, indicators, and baselines. The reasoning process is a deductive
one, flowing back from the desired outcomes.

Factors to Consider When Selecting Performance Indicator Targets
One factor is taking baselines seriously. There must be a clear understanding of the baseline
starting point, for example, an average of the last three years' performance, last year's
performance, the average trend, data over the past six months, and so forth. In other words,
previous performance should be considered in projecting new performance targets. One might
observe how an organization or policy has performed over the previous few years before projecting
future performance targets.
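
The sketch below illustrates this reasoning with hypothetical figures: the baseline is taken as the average of the last three years' performance, and the target adds only the improvement judged feasible over the target period.

# Sketch: projecting a target from recent performance. Figures are
# hypothetical, e.g. immunization coverage in percent.
last_three_years = [62.0, 64.5, 66.0]

baseline = sum(last_three_years) / len(last_three_years)
feasible_gain = 4.0  # percentage points judged feasible given resources

target = baseline + feasible_gain
print(f"Baseline: {baseline:.1f}%  Target: {target:.1f}%")
# Prints: Baseline: 64.2%  Target: 68.2%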

Another consideration in setting targets is the expected funding and resource levels (existing
capacity, budgets, personnel, funding sources, facilities, and the like) throughout the target
period. This can include internal funding sources as well as external funding from bilateral and
multilateral donors. Targets should be feasible given all of the resource considerations as well
as the organizational capacity to deliver activities and outputs.

The political nature of the process also comes into play. Political concerns are important: what
has the government or administration promised to deliver? Citizens have voted for a particular
government based on articulated priorities and policies that need to be recognized and
legitimized in the political process. Setting targets is part of this political process, and there will
be political ramifications for either meeting or failing to meet targets.

STEP 6: MONITORING FOR RESULTS

After selecting targets and completing the performance-based framework, we are now ready to
use the information to monitor for results. This step describes putting together a system to obtain
the data necessary to better inform the decision-making process. The resulting data will provide
evidence on performance and flag any changes that may be needed for a given project, program,
or policy.

Key Types and Levels of Monitoring

There are two key types of monitoring: implementation monitoring and results monitoring. Both
are important in tracking results.

Implementation monitoring tracks the means and strategies (that is, the inputs, activities, and
outputs found in annual or multiyear work plans) used to achieve a given outcome. These means
and strategies are supported by the use of management tools, including budgetary resources,
staffing, and activity planning. It should also be noted that there is an interaction between means
and strategies (inputs, activities, and outputs) and outcome targets: targets are set according to
what the means and strategies can potentially yield.
Results monitoring, by contrast, is set up to observe the results achieved throughout the
project, program, or policy term. Partner structures are particularly important in this context:
where possible, results monitoring should be integrated into existing structures and make use of
available capacity.

STEP 7: USING EVALUATION INFORMATION TO SUPPORT A RESULTS-BASED
MANAGEMENT SYSTEM

The previous steps placed a strong emphasis on the monitoring function. Building a monitoring
system to continuously track performance is absolutely essential for managers. The monitoring
system gives ongoing information on the direction, pace, and magnitude of change. It can also
identify unanticipated changes. All are critical to knowing whether policies, programs, and
projects are moving in the intended direction. We have also stressed that monitoring data do not
provide the basis for attributing causality to change.

Monitoring data also do not provide evidence of how changes are coming about, only
that they are or are not occurring. Likewise, monitoring data, in and of themselves, cannot
address the strengths and weaknesses in the design of the project, program, or policy.
Consequently, to address these and other important questions regarding the generation of
appropriate results, evaluation information is necessary.

Evaluation Criteria
i. Relevance
Relevance is a measure of the extent to which development interventions meet population needs
and country priorities, and are consistent with donor policies.

For example, in a road project, relevance could be assessed in terms of the rationale for
constructing the road: was it to serve the political agenda of a few, or to exploit real economic
potential? In a sector programme to support agriculture, relevance could be assessed in terms of
domestic market responses to new crops, farmers' responses to the various programme
initiatives, and so on.
A change in society’s policies or priorities could imply that the development interventions are
now accorded lower priority, or lose some of their rationale. Once an endemic disease has been
eradicated, for instance, it could mean there is no longer any need for a special health
programme.
In other words, relevance is basically a question of usefulness; in turn, the assessment of
relevance leads to higher-level decisions as to whether the development activities in question
ought to be terminated or allowed to continue. And, if the latter is the case, what changes ought
to be made, and in what direction? Are the agreed objectives still valid, and do they represent
sufficient rationale for continuing the activities?

ii. Efficiency
Efficiency is a measure of the relationship between outputs, i.e. the products or services of an
intervention, and inputs, i.e. the resources that it uses.
An output is a measure of effort; it is the immediate observable result of intervention processes
over which the managers of the intervention, i.e. the implementers, have some measure of
control. An intervention can be thought of as efficient if it uses the least costly resources that are
appropriate and available to achieve the desired outputs, i.e. deliverables, in terms of quantity
and quality.
The quality of the inputs and the outputs is an important consideration in assessing efficiency:
the most economical resource is not necessarily the most appropriate, and the trade-offs between
the quantity of outputs and their quality are a key factor in overall performance.
Furthermore, assessing the efficiency of an intervention generally requires comparing alternative
approaches to achieving the same outputs, and this will be easier for some types of intervention
than for others.
In practice, the extent to which intervention activities are standardised or not, i.e. whether the
factors of production are well known or not, usually determines how efficiency is measured and assessed.

In a road building project, for example, where the methods of construction are fairly well
established, a typical measure of efficiency would be the cost per km per class of road. Because
other projects and jurisdictions are also likely to use that same measure of efficiency,
among others, the bases for comparison and assessment, or benchmarks, are readily available in
most cases.
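
A minimal sketch of this measure, with hypothetical figures, follows: the project's cost per km is computed and compared against a benchmark drawn from similar projects.

# Sketch: cost per km as an efficiency measure for a road project,
# compared to a benchmark from similar projects. Figures are hypothetical.
total_cost = 15_000_000   # project cost, local currency units
km_built = 60.0           # kilometres of road completed

cost_per_km = total_cost / km_built
benchmark_per_km = 230_000  # typical cost per km for this class of road

ratio = cost_per_km / benchmark_per_km
print(f"Cost per km: {cost_per_km:,.0f} (benchmark: {benchmark_per_km:,.0f})")
print(f"Project cost is {ratio:.0%} of the benchmark")
# A ratio above 100% suggests the project was less efficient than
# comparable projects, other things being equal.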

iii. Effectiveness
Effectiveness is a measure of the extent to which the intervention’s intended outcomes, i.e. its
specific objectives – intermediate results – have been achieved.
Explicitly, effectiveness is the relationship between the intervention’s outputs, i.e. its products or
services – its immediate results – and its outcomes, meaning usually the intended benefits for a
particular target group of beneficiaries.
As such, an intervention is considered effective when its outputs produce the desired outcomes;
it is efficient when it uses resources appropriately and economically to produce the desired
outputs.
For example, a teaching programme is considered effective if students learn, i.e. acquire the
intended knowledge, skills, and abilities; it is considered efficient if it provides instruction, i.e.
teaching time and materials, economically and with adequate quality.
An efficient intervention is not necessarily effective. Teaching may be provided economically
and efficiently, but if it is not of good quality, e.g. appropriate to the needs and interests of the
students, the intended learning outcomes will not be achieved, i.e. it will not be effective.

iv. Impact
Impact is a measure of all significant effects of the development intervention, positive or
negative, expected or unforeseen, on its beneficiaries and other affected parties.
Whereas effectiveness focuses on the intended outcomes of an intervention, impact is a measure
of the broader consequences of the intervention such as economic, social, political, technical or
environmental effects; locally, regionally, or at the national level; on the target group and other
directly or indirectly affected parties.

For example, an HIV/AIDS prevention and treatment programme targeting vulnerable groups
could have broader effects, both positive, such as a reduction in the incidence of tuberculosis in
other groups, and negative, such as a reduction of funding to malaria prevention. Effects may
also be economic in nature, e.g. the size of the workforce, or political, e.g. state budget
allocations, and so on.
A broad assessment of impact is essential in a comprehensive evaluation; however, there are two
central challenges to assessing impact: boundary judgment, i.e. deciding what effects to select for
consideration, and attribution, i.e. determining what effect is due to what.
v. Sustainability
Sustainability is a measure of whether the benefits of a development intervention are likely to
continue after external support has been completed.
While the four preceding criteria concern specific development interventions, the assessment of
sustainability addresses the effects of the development process itself over the long term.
For example, in a road construction project, sustainability can be measured in terms of whether
the road is likely to be maintained, the extent to which it will be used and provide benefits in the
future, etc. In a sector programme to support agriculture, it could be measured in terms of
financial and economic viability of the agricultural production and the supporting institutions,
the extent to which economic surplus is reinvested productively by farmers, etc.
Sustainability is in many ways a higher level test of whether or not the development intervention
has been a success. Far too many development initiatives tend to fail once the implementation
phase is over, because either the target group or the responsible parties do not have the means or
sufficient motivation to provide the resources needed for the activities to go further.
Sustainability is becoming an increasingly central theme in evaluation work since many
development agencies are putting greater emphasis on long term perspectives and on lasting
improvements.
As a result, capacity-development of communities and organisations is a common objective of
development interventions, consistent with the overall goal of promoting increased autonomy
and self-reliance of partner countries for the provision of public services.

STEP 8: REPORTING THE FINDINGS

Performance information, derived from both monitoring and evaluation, is to be used as a
management tool. Both monitoring and evaluation can provide critical, continuous, and real-
time feedback on the progress of a given project, program, or policy.

Analysing and reporting performance findings is a critical step because it determines what is
reported, when it is reported, and to whom it is reported. This step also has to address the current
technical capacity of the organization, because it focuses on the methodological dimensions of
accumulating, assessing, and preparing analyses and reports.

The Uses of Monitoring and Evaluation Findings


Monitoring and evaluation reports can play many different roles, and the information produced
can be put to very different uses:

• To demonstrate accountability – delivering on political promises made to the citizenry and other stakeholders
• To convince – using evidence from findings
• To educate – reporting findings to help organizational learning
• To explore and investigate – seeing what works, what does not, and why
• To document – recording and creating an institutional memory
• To involve – engaging stakeholders through a participatory process
• To gain support – demonstrating results to help gain support among stakeholders
• To promote understanding – reporting results to enhance understanding of projects, programs, and policies.

Evaluation reports serve many purposes. The central purpose, however, is to "deliver the
message" – inform the appropriate audiences about the findings and conclusions resulting from
the collection, analysis, and interpretation of evaluation information.

Know and Target the Audience

Know your audience and how they want to see the information expressed. The interests,
expectations, and preferred communications medium of the audience should be taken into
account. A communications strategy should be developed that addresses the following
questions:
• Who will receive the information?
• In what format?
• When?
• Who will prepare the information?
• Who will deliver the information?

Presentation of Performance Data in Clear and Understandable Form


It is important to report results data in comparison to earlier data and to the baseline.
Comparisons over time are critical. The following data can be reported:
• Expenditure or outcomes – cost of, or return on, a project, program, or policy
• Raw numbers – early indicators, rough projections, estimates, and so forth
• Percentages (for example, the percentage of citizens served by a project)
• Statistical tests
• Organizational units
• Geographical locations
• Demographics
• Client satisfaction scales – high, medium, low.
Data should be presented in a simple, clear, and easily understandable format. Only the most
important data should be presented. Acronyms and jargon should be avoided. A minimum of
background information should be provided to establish the context. Major points should be
stated up front. Findings and recommendations should be organized around key outcomes and
their indicators. A separate appendix or report can be used to convey detailed data.
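
As a minimal illustration, the sketch below (Python, with hypothetical indicators and figures) prints results against the baseline and target in a simple, comparable layout of the kind described above.

# Sketch: presenting results against baseline and target in a simple,
# comparable layout. Indicators and figures are hypothetical.
rows = [
    # (indicator, baseline, current, target)
    ("Births attended by skilled staff (%)", 45.0, 57.5, 60.0),
    ("Households with safe water (%)", 58.0, 63.0, 70.0),
]

print(f"{'Indicator':<40}{'Baseline':>10}{'Current':>10}{'Target':>10}")
for name, base, current, target in rows:
    print(f"{name:<40}{base:>10.1f}{current:>10.1f}{target:>10.1f}")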

STEP 9: USING THE FINDINGS

Using findings to improve performance is the main purpose of building a results-based M&E
system. The main point of the M&E system is not simply to generate continuous results-based
information, but to get that information to the appropriate users in a timely fashion so that the
performance feedback can be used to better manage organizations and governments.

Findings can be used in a variety of concrete ways

With respect to helping formulate and justify budget requests, performance information can
inform decisions that lead to budgetary increases – or reductions. Projects, programs, and
policies may be enhanced or expanded based on performance feedback; likewise, they may be
cut or eliminated altogether. Managers also have the option of offering incentives (monetary and
nonmonetary) to personnel for good performance, or sanctions (such as poor employee or
manager performance reviews) for performance that fails to meet expectations or falls short of
intended outcomes.
In terms of motivating personnel, when civil servants are brought in as partners to the business of
government, we see better implementation. Employees throughout the system begin to
understand, and become more enthusiastic about, their contributions toward achievement of the
desired goal when they have a "line of sight" between their own actions and the goal.

M&E systems provide important feedback about the progress, as well as the success or failure, of
projects, programs, and policies throughout their respective cycles. These systems constitute a
powerful, continuous public management tool that decision-makers can use to improve
performance, and demonstrate accountability and transparency with respect to results.

Strategies for Sharing Information


Empower the Media
The media can be an important partner in disseminating the findings generated by results-based
M&E systems. For example, the media often report on whether governments or organizations
have actually delivered on promised projects, programs, policies, and services. The media have
also been instrumental in exposing corruption and calling for good or better governance in many
countries.

Enact "Freedom of Information" Legislation
Freedom of information is another powerful tool that can be used to share information with
concerned stakeholders. For example, the government of Romania recently enacted freedom of
information legislation with the stipulation that, except for information that could impair
the country's ability to protect and defend itself, anyone who asks for information about how
well the government is performing will receive it.

Institute E-Government
E-government is increasingly being used as a tool by governments around the world, and has
become a particular priority among OECD countries. E-government involves the use of
information technology to provide better accessibility, outreach, information, and services. It
represents a new electronic environment in which stakeholders can interact directly with the
government, obtain information from the government, and even transact business online.
Developing countries are moving in this direction too. The government of Jordan, for example,
is beginning its e-government initiative with the introduction of electronic procurement and
accounting.

Put Information on Internal and External Internet Sites

The use of internal (agency or government) and external websites that include published
performance findings is yet another effective way of sharing information. Many agencies are
also developing searchable databases of M&E findings.

Publish Annual Budget Reports

There is no more important way to communicate how taxpayer money is being spent than to
publish the budget. Citizens then have the opportunity to "compare" the quality and level of
service being provided by the government, and the priority given to that service or program in the
expenditure plan.

Engage Civil Society and Citizen Groups

Engaging civil society and citizen groups also involves the inclusion of "…accountability,
advocacy and action-oriented audiences and …agreement on the information (content and form)
they need".

Strengthen Parliamentary Oversight

Strengthening parliamentary oversight is another important way to share and disseminate
information. Many parliaments have active budget or public accounts committees in their lower or
upper chambers. There are also agencies that provide parliaments with oversight, for example,
the US General Accounting Office (GAO), the audit and evaluation office of the Congress, or the
National Audit Office for the Parliament in the U.K. The GAO and similar government
organizations and agencies also perform an investigative function for the parliaments they serve.
Parliaments in various countries, both developed and developing, are starting to ask for
performance information as part of their oversight function. They are looking to
see that budgets are used effectively; thus, more governments are considering moving toward
programmatic budgeting.
Strengthen the Office of the Auditor General
Many countries are also finding the Office of the Auditor General to be a key partner in
determining whether governments are functioning effectively. Interestingly, as audit agencies
demand more information about how well the public sector is performing and how projects,
programs, and policies are actually being implemented, we are starting to see better
implementation.

Share and Compare Results Findings with Development Partners

Sharing and comparing results findings with development partners is also beneficial on a number
of levels. Learning from evaluation knowledge becomes wider than simply organizational
learning, encompassing development learning as well. It helps to test systematically the validity,
relevance, and progress of the development hypothesis.

STEP 10: SUSTAINING THE M&E SYSTEM WITHIN THE ORGANIZATION

An M&E system should be regarded as a long-term effort, as opposed to an episodic effort for a
short period or for the duration of a specific project, program, or policy. Sustaining such systems
within governments or organizations means recognizing the long-term process involved in ensuring
their utility.

Six Critical Components of Sustaining Results-Based M&E Systems

We will examine six critical components involved in building the sustainability of M&E
systems. Each of these dimensions needs continuous attention and care.

Demand
If demand is episodic or haphazard, results-based M&E systems are not going to be used and
sustained. Structured requirements for reporting results, including legislation, regulations, and
international development requirements, can help lead to sustained, consistent demand for such
systems. Governments, civil society, and donors increasingly require the kind of results
information that M&E systems are best placed to monitor and measure.
In many cases, demand can also be stimulated when the strategic goals of the government are
translated into results-based M&E systems, such as through national poverty reduction
strategies and other initiatives. These are not simply activity-driven initiatives; rather, they try
to answer the "so what" question: what are the consequences of policy and program efforts to
reduce poverty and address the most vulnerable groups?

Clear Roles and Responsibilities

Clear roles and responsibilities and formal organizational and political lines of authority must be
established. The organization and the people who will be in charge of collecting, analyzing, and
reporting performance information must be clearly defined. Guidance is necessary. For example,
a Ministry of Finance may be responsible for administering National Poverty Reduction
Strategies or initiatives, and will need to issue directions to the sector or line ministries to collect
and report on data relevant to tracking the various outcomes specified in the strategy.

Trustworthy and Credible Information

M&E systems must be able to produce results information that brings both good and bad
news. Performance information should be transparent and made available to all key
stakeholders. If debate of issues is not backed up by trustworthy and credible information, only
personal opinions and presumptions are left.
It should also be noted that the producers of results information need protection from political
reprisals. If bad news brings career problems to the messenger, fear will permeate the system and
the reliability of the information produced will be compromised. A quick way to undermine an
M&E system is to punish those who deliver bad news.

Accountability
No part of the government should be exempt from accountability to stakeholders. Civil society
organizations and NGOs (such as Transparency International) can play a key role in encouraging
transparency and accountability, and can even help with collecting data. For example, NGOs in
Bangladesh help to collect local educational data because the capacity to collect and report on
such data is very weak within the government. The media, private sector, and parliament also
have roles to ensure that the information produced is timely, accurate, available, and addresses
government performance. It is also important not to reward failure. Accountability means that
problems should be acknowledged and addressed.

Capacity
Sound technical skills in data collection and analysis are necessary for the system’s
sustainability. Managerial skills in strategic goal setting and organizational development are also
needed. Data collection and retrieval systems must be up and running – and modernized.
Governments will need to commit continuing financial resources to the upkeep and management
of results-based M&E systems. Institutional experience and memory are also helpful to the long-
term sustainability of these systems.

Incentives

Incentives need to be introduced to encourage the use of performance information. This means that
success needs to be acknowledged and rewarded, problems need to be addressed, messengers
must not be punished, organizational learning must be valued, and budget savings must be shared.
Corrupt or ineffective systems cannot be counted on to produce quality information and
analysis.

5.6 Summary

In this topic, we have looked at the concept of results-based monitoring and evaluation
and how it applies to high-level decision makers. The lecture has also explored the ten
steps followed in developing a results-based M&E system.
5.7 Activity

1. Reflecting on traditional monitoring and evaluation processes and results-based
monitoring and evaluation, what would you say are the differences or similarities?
Do you think Kenya has embraced all these types of M&E? If not, what might be the
constraints?

5.8 Further Reading

 Find out more on what Kusek and Rist (2012) have to say about their Ten Steps to
a Results-Based Monitoring & Evaluation System.

5.9 Self Test Questions

 How can organizations conducting results-based M&E ensure that information
reaches the intended authorities?
 What should be done to sustain results-based M&E?
 Illustrate how information from results-based M&E can be useful.

5.10 Glossary

1. Monitoring - is a continuous function that uses the systematic collection of data on


specified indicators to provide management and the main stakeholders of an ongoing
development intervention with indications of the extent of progress and achievement of
objectives and progress in the use of allocated funds.
2. Evaluation - is the systematic and objective assessment of an ongoing or completed
project, program, or policy, including its design, implementation, and results.
3. Results-based M&E - Results-based monitoring and evaluation (M&E) is a powerful
public management tool that can be used to help policymakers and decision makers track
progress and demonstrate the impact of a given project, program, or policy.

5.11 Answers to the Self-Test Questions

1. How can organizations conducting results-based M&E ensure that information reaches
the intended authorities?

 Engage the media in highlighting the findings, e.g. the Kenya Truth, Justice and
Reconciliation Report
 Advocate for the government to enact the “Freedom of Information” bill
 Use the internet – websites and social media
 Publish reports of findings and share them with the public
 Engage civil society and citizen groups
 Work with the Office of the Auditor General

2. What should be done to sustain results-based M&E?

 Create demand for results-based M&E
 Institute clear roles and responsibilities among institutions or government departments
 Produce useful, trustworthy and credible information
 Encourage accountability of public institutions
 Build capacity for results-based M&E
 Introduce incentives to encourage the use of performance information

3. Illustrate how information from results-based M&E can be useful

 It may lead to the expansion or elimination of programs or policies
 Increased allocation of resources to needed sectors
 Enhanced productivity and performance (lessons learnt)
 It is a powerful tool for decision making

References

Kusek, J. and Rist, R. (2012). Ten Steps to a Results-Based Monitoring & Evaluation System.
Washington, DC: World Bank.

LECTURE 6

EVALUATION MODELS AND TECHNIQUES

6.0 Introduction

Welcome to this lesson. We shall focus on evaluation models and techniques. The lecture will
present several evaluation models with their unique characteristics.

6.1 Objectives

By the end of this topic you should be able to;


 Understand evaluation models and techniques
 Select the most appropriate models and techniques for programs
6.2 Lecture Outline

 Evaluation models/approaches
 Evaluation techniques/designs

6.3 Evaluation Models and Approaches


The following models and approaches are frequently mentioned in the evaluation literature.

CIPP Model
The CIPP Model is a social systems model applied to program evaluation. It stands for: C –
Context evaluation, I – Input evaluation, P – Process evaluation, P – Product evaluation.
These components are or can be viewed as separate forms of evaluation, but they can also be
viewed as steps or stages in a comprehensive evaluation.

i. Context evaluation includes examining and describing the program context, the target
population and its needs, opportunities for addressing the needs, problems underlying the
needs, and whether program goals are sufficiently responsive to the assessed needs.
ii. Input evaluation includes activities such as: description of the program inputs and
resources; comparison of how the program might perform relative to other programs;
evaluation of the proposed design of the program; and examination of what alternative
strategies and procedures should be considered and recommended. In short, this type of
evaluation examines what the program plans on doing. It helps in making program-structuring
decisions.

iii. Process evaluation includes examining how a program is being implemented,
monitoring how the program is performing, auditing the program to make sure it is
following required legal and ethical guidelines, and identifying defects in the procedural
design or in the implementation of the program. It is here that evaluators provide
information about what is actually occurring in the program. This feedback can be
helpful in making formative evaluation decisions (i.e., decisions about how to modify or
improve the program).
iv. Product evaluation includes determining and examining the general and specific
outcomes of the program (which requires using impact or outcome assessment
techniques), measuring anticipated outcomes, assessing the merit of the program, and
conducting a retrospective benefit/cost assessment (to establish the actual worth or value
of the program). Product evaluation is very helpful in making summative evaluation
decisions (e.g., What is the merit and worth of the program? Should the program be
continued?).

Behavioural Objectives Approach


This approach focuses on the degree to which the objectives of a program, product, or process
have been achieved. The major question guiding this kind of evaluation is, “Is the program,
product, or process achieving its objectives?”

The Four-Level Model


This approach is most often used to evaluate training and development programs (Kirkpatrick,
1994). It focuses on four levels of training outcomes: reactions, learning, behaviour, and results.
The major question guiding this kind of evaluation is, “What impact did the training have on
participants in terms of their reactions, learning, behaviour, and organizational results?”

Responsive Evaluation
This approach calls for evaluators to be responsive to the information needs of various audiences
or stakeholders. The major question guiding this kind of evaluation is, “What does the program
look like to different people?”

Improvement/Accountability-Oriented Evaluation Approaches
It stresses the need to fully assess a program’s merit and worth. It is expansive and seeks
comprehensiveness in considering the full range of questions and criteria needed to assess a
program’s value.
Often it employs the assessed needs of a program’s stakeholders as the foundational criteria for
assessing the program’s merit and worth. It also seeks to examine the full range of pertinent
technical and economic criteria for judging program plans and operations. It looks for all
relevant outcomes, not just those keyed to program objectives.

Goal-Free Evaluation
This approach focuses on the actual outcomes rather than the intended outcomes of a program.
Thus, the evaluator has minimal contact with the program managers and staff and is unaware of
the program’s stated goals and objectives. The major question addressed in this kind of
evaluation is, “What are all the effects of the program, including any side effects?”

Adversary/Judicial Approaches
These approaches adapt the legal paradigm to program evaluation. Thus, two teams of evaluators
representing two views of the program’s effects argue their cases based on the evidence (data)
collected. Then, a judge or a panel of judges decides which side has made a better case and
makes a ruling. The question this type of evaluation addresses is, “What are the arguments for
and against the program?”

Consumer-Oriented Approaches
The emphasis of this approach is to help consumers choose among competing programs or
products. Consumer Reports provides an example of this type of evaluation. The major question
addressed by this evaluation is, “Would an educated consumer choose this program or product?”

Expertise/Accreditation Approaches

The accreditation model relies on expert opinion to determine the quality of programs. The
purpose is to provide professional judgments of quality. The question addressed in this kind of
evaluation is, “How would professionals rate this program?”

Utilization-Focused Evaluation
According to Patton (1997), “utilization-focused program evaluation is evaluation done for and
with specific, intended primary users for specific, intended uses”. As such, it assumes that
stakeholders will have a high degree of involvement in many, if not all, phases of the evaluation.
The major question being addressed is, “What are the information needs of stakeholders, and
how will they use the findings?”

Participatory/Collaborative Evaluation
The emphasis of participatory/ collaborative forms of evaluation is engaging stakeholders in the
evaluation process, so they may better understand evaluation and the program being evaluated
and ultimately use the evaluation findings for decision-making purposes. As with utilization-
focused evaluation, the major focusing question is, “What are the information needs of those
closest to the program?”

Empowerment Evaluation
This approach, as defined by Fetterman (2001), is the “use of evaluation concepts, techniques,
and findings to foster improvement and self-determination”. The major question characterizing
this approach is, “What are the information needs to foster improvement and self-
determination?”

Organizational Learning
Some evaluators envision evaluation as a catalyst for learning in the workplace. Thus, evaluation can be
viewed as a social activity in which evaluation issues are constructed by and acted on by
organization members. This approach views evaluation as ongoing and integrated into all work
practices. The major question in this case is, “What are the information and learning needs of
individuals, teams, and the organization in general?”

Theory-Driven Evaluation
This approach to evaluation focuses on theoretical rather than methodological issues. The basic
idea is to use the “program’s rationale or theory as the basis of an evaluation to understand the
program’s development and impact”. By developing a plausible model of how the program is
supposed to work, the evaluator can consider social science theories related to the program as
well as program resources, activities, processes, outcomes, and assumptions. The major
focusing questions here are, “How is the program supposed to work? What are the assumptions
underlying the program’s development and implementation?”

Success Case Method


This approach to evaluation focuses on the practicalities of defining successful outcomes and
success cases and uses some of the processes from theory-driven evaluation to determine the
linkages, which may take the form of a logic model, an impact model, or a results map.
Evaluators using this approach gather stories within the organization to determine what is
happening and what is being achieved. The major question this approach asks is, “What is really
happening?”

6.4 Evaluation techniques/designs


An evaluation design refers to the overarching methodological framework that guides an
evaluation effort; in other words, it is the conceptual lens through which the evaluation is viewed
and implemented. Evaluation designs that collect quantitative data fall into one of three
categories, namely pre-experimental, quasi-experimental, and true experimental designs.

The following are brief descriptions of the most commonly used evaluation designs:

One-Shot Design; in using this design, the evaluator gathers data following an intervention or
program. For example, a survey of participants might be administered after they complete a
workshop.

Retrospective Pretest; as with the one-shot design, the evaluator collects data at one time but
asks for recall of behaviours or conditions prior to, as well as after, the intervention or program.

One-Group Pretest-Post-test Design; the evaluator gathers data prior to and following the
intervention or program being evaluated.

Time Series Design; the evaluator gathers data prior to, during, and after the implementation of
an intervention or program.

Pretest-post-test Control-Group Design; the evaluator gathers data on two separate groups prior
to and following an intervention or program. One group, typically called the experimental or
treatment group, receives the intervention. The other group, called the control group, does not
receive the intervention.
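
The arithmetic behind this design can be sketched briefly. In the minimal Python illustration below, all scores are invented for the example; the program effect is estimated as the treatment group's gain minus the control group's gain, and in practice the comparison would be accompanied by an appropriate statistical test.

```python
# Hypothetical scores for a pretest-post-test control-group design.
treatment_pre, treatment_post = [52, 48, 55, 60], [70, 66, 72, 75]
control_pre, control_post = [50, 51, 47, 58], [55, 54, 50, 61]

def mean(values):
    return sum(values) / len(values)

# Change observed in each group between pretest and post-test...
treatment_gain = mean(treatment_post) - mean(treatment_pre)   # 17.0
control_gain = mean(control_post) - mean(control_pre)         # 3.5

# ...and the estimated program effect: the extra gain in the treatment
# group over and above the change the control group experienced anyway.
program_effect = treatment_gain - control_gain                # 13.5
print(f"Estimated program effect: {program_effect:.1f} points")
```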

Post-test-Only Control-Group Design; the evaluator collects data from two separate groups
following an intervention or program. One group, typically called the experimental or treatment
group, receives the intervention or program, while the other group, typically called the control
group, does not receive the intervention. Data are collected from both of these groups only after
the intervention.

Case Study Design; when evaluations are conducted for the purpose of understanding the
program’s context, participants’ perspectives, the inner dynamics of situations, and questions
related to participants’ experiences, and where generalization is not a goal, a case study design,
with an emphasis on the collection of qualitative data, might be most appropriate. Case studies
involve in-depth descriptive data collection and analysis of individuals, groups, systems,
processes, or organizations. In particular, the case study design is most useful when you want to
answer how and why questions and when there is a need to understand the particulars,
uniqueness, and diversity of the case.

6.5 Summary

The lesson has described different evaluation approaches, models and designs.

6.6 Activity

1. Compare and contrast the different evaluation models and approaches. Decide which ones
are applicable to the Kenyan environment and why.

6.7 Further Reading

 A variety of different evaluation models are presented in the evaluation literature.
Research other evaluation models, approaches and designs not included in this
lecture.

6.8 Self Test Questions

 List and describe two commonly used evaluation models/approaches

6.9 Answers to the Self-Test Questions

List and describe any two of the following commonly used evaluation models/approaches:

 Behavioural objectives approach; this approach focuses on the degree to which the
objectives of a program, product, or process have been achieved. The major question
guiding this kind of evaluation is, “Is the program, product, or process achieving its
objectives?”
 Goal-Free Evaluation; this approach focuses on the actual outcomes rather than the
intended outcomes of a program. Thus, the evaluator has minimal contact with the
program managers and staff and is unaware of the program’s stated goals and objectives.
The major question addressed in this kind of evaluation is, “What are all the effects of the
program, including any side effects?”
 Consumer-Oriented Approaches; the emphasis of this approach is to help consumers
choose among competing programs or products. Consumer Reports provides an example
of this type of evaluation. The major question addressed by this evaluation is, “Would an
educated consumer choose this program or product?”
 Participatory/Collaborative Evaluation; the emphasis of participatory/ collaborative forms
of evaluation is engaging stakeholders in the evaluation process, so they may better
understand evaluation and the program being evaluated and ultimately use the evaluation

findings for decision-making purposes. As with utilization-focused evaluation, the major
focusing question is, “What are the information needs of those closest to the program?”

References

1. International Federation of Red Cross and Red Crescent Societies. (2011).


Project/programme monitoring and evaluation (M&E) guide. Geneva. IFRC
& RCS.
2. International Labour Organisation (2010). Project Design Manual: A step-by-step tool
to support the development of cooperatives and other forms of self-help
organizations. Accessed from www.ilo.org/coop
3. Preskill, H. and Russ-Eft, D. (2005). Building Evaluation Capacity: 72 Activities for
Teaching and Training. United States: SAGE Publications.

LECTURE 7

APPLICATION OF MONITORING AND EVALUATION PRINCIPLES

7.0 Introduction

Welcome to the last topic of this course. We shall learn about the guiding principles of
monitoring and evaluation and how they can be observed in practice.

7.1 Objectives

By the end of this topic you should be able to;


 Understand the principles guiding monitoring and evaluation
 Apply monitoring and evaluation within suggested guidelines

7.2 Lecture Outline

 Monitoring and evaluation guiding principles


 Application of Principles to M&E processes

7.3 Monitoring and Evaluation Guiding Principles

The United Nations has proposed some principles that can guide the process of monitoring and
evaluation. These include:
1. Monitoring and evaluation should refer back to existing baseline data or begin with a
baseline study
2. Monitoring is an integral part of programme implementation
3. Monitoring should be linked to the specific programme objectives that were defined
through the appraisal and programme planning process

4. Monitoring information should be used in decision-making
5. Evaluation should follow a specific methodology designed to gather information about
programme success
6. Monitoring and evaluation should respect principles of participation and involve all
programme stakeholders, including programme implementers, beneficiaries (programme
users), women and girl survivors when appropriate and safe, local officials, and other
observers such as related civil society groups
7. Monitoring and evaluation should respect and protect the rights, welfare, and
confidentiality of all those involved in the programme.

7.4 Application of Principles to M&E processes

This section highlights how organizations and program managers can ensure that monitoring
and evaluation processes follow the above-stated guidelines. The measures may include the
creation of laws, policies, rules, opportunities and products to enhance compliance with the
stated guidelines, as outlined below:

Monitoring and evaluation should refer back to existing baseline data or begin with a baseline
study

 Organizations should plan to collect data before the program begins in order to ascertain
the prevailing situation, so that progress can be measured effectively.
 They can also use already existing relevant information about the current situation from
government or other organizations. (A small numerical sketch of measuring progress
against a baseline follows this list.)
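
A minimal numerical sketch of why the baseline matters when measuring progress; all figures below are invented for illustration.

```python
# Hypothetical indicator values for a single programme indicator.
baseline = 38.0   # e.g. % of children fully immunised before the program
current = 47.5    # the same indicator at the latest monitoring round
target = 60.0     # end-of-program target

# Without the baseline, the 47.5% reading says little; with it, both the
# change achieved and the distance still to travel can be reported.
change = current - baseline
progress_to_target = change / (target - baseline) * 100
print(f"Change since baseline: {change:+.1f} percentage points")
print(f"Progress toward target: {progress_to_target:.0f}%")
```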

Monitoring is an integral part of programme implementation
 As discussed in previous lectures, monitoring is a continuous activity carried out to track
the progress of program activities. Therefore, to ensure that data is collected regularly,
program managers need to assign responsibilities to program staff and ensure that
resources (financial and material) are available to support the people tasked with
monitoring activities.

Monitoring should be linked to the specific programme objectives that were defined through
the appraisal and programme planning process
 Again, we have seen that monitoring and evaluation systems are guided by the contents
of the logical framework, which includes goals, objectives, outputs and activities.
Therefore, those monitoring and evaluating should be keen on what the program is all
about (what does it want to contribute to and how is it doing so); this involves using
objectives and activities from the log frame to determine whether what was planned
is being achieved (see the sketch after this list).
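
A minimal sketch of this linkage: monitoring data is compared directly against the objectives and targets taken from the log frame. The objectives, indicators and figures below are illustrative assumptions.

```python
# Illustrative log frame extract: each entry pairs an objective with the
# indicator and target defined during programme planning.
logframe = [
    {"objective": "Increase household incomes",
     "indicator": "average monthly income (KES)",
     "target": 8000, "actual": 6500},
    {"objective": "Improve access to clean water",
     "indicator": "households within 1 km of safe water (%)",
     "target": 75, "actual": 78},
]

# Monitoring report: compare what was planned against what is happening.
for item in logframe:
    status = "on track" if item["actual"] >= item["target"] else "behind plan"
    print(f'{item["objective"]} ({item["indicator"]}): '
          f'{item["actual"]} vs target {item["target"]} -> {status}')
```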
Monitoring information should be used in decision-making
 Monitoring information should reach program managers and other decision makers in
time for them to make changes to on-going activities. Therefore, an organization needs
to put in place an effective communication system that allows information to flow
from monitoring personnel to decision makers and vice versa.
 Decision makers should be effective in analysing and using the information to make
necessary changes to on-going activities. Some program managers might receive the
information but be reluctant to act on it, in which case it is of little help.

Evaluation should follow a specific methodology designed to gather information about
programme success
 The organization and/or program manager should be able to choose an appropriate
evaluation method or model based on cost, time and capacity.
 Using a recognized evaluation methodology will improve the credibility of the results or
information.
 Organizations can either use available evaluation models or develop one according to
their needs.
 Organizations that lack the capacity to develop their own evaluation model may hire
external evaluators to do so for them.

Monitoring and evaluation should respect principles of participation


 Organizations and program managers should involve all stakeholders of the program in
planning, monitoring and evaluation processes
 Develop monitoring and evaluation systems that meet the interests of different
stakeholders within the program
 Agree on what success means for various stakeholders in the program
 Build monitoring and evaluation capacity of stakeholders

Monitoring and evaluation should respect and protect the rights, welfare, and confidentiality
of all those involved in the programme.
 Monitoring and evaluation should be ethical and maintain confidentiality of those
participating in such processes.
 Data collection instruments should be anonymous
 No respondent should be referred to by their true name; conceal their identities to protect
them (a minimal anonymization sketch follows this list)
 The information generated should only be used for the purpose of improving the program
and not for belittling participants
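
One simple way of concealing identities is to replace respondents' names with stable pseudonymous codes before monitoring data is stored or shared. A minimal sketch using Python's standard hashlib module; the salt and names are illustrative, and the salt would be kept separately from the data set.

```python
import hashlib

SALT = "program-specific-secret"  # illustrative; stored apart from the data

def pseudonym(name: str) -> str:
    """Return a stable, non-reversible code to use in place of a name."""
    digest = hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()
    return "R-" + digest[:8]  # short respondent code used in all reports

# Hypothetical respondents: only the codes appear in the shared data set.
for respondent in ["Amina W.", "John K."]:
    print(respondent, "->", pseudonym(respondent))
```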

7.5 Summary

The lesson has highlighted the principles of conducting monitoring and evaluation. We have
also discussed ways program managers can ensure that these principles are followed to the
letter when conducting monitoring and evaluation.

7.6 Activity

1. Reflecting on the monitoring and evaluation approaches used by non-profit organizations
in Kenya, do you think they uphold the M&E guiding principles stated in this lecture?

7.7 Further Reading

 Find out other ways organizations and program managers can ensure they meet
the UN M&E principles

7.8 Self Test Questions

 State strategies that can encourage participatory monitoring and evaluation

7.9 Answers to the Self-Test Questions


1. State strategies that can encourage participatory monitoring and evaluation
 Involving all stakeholders of the program in planning, monitoring and evaluation
processes
 Developing monitoring and evaluation systems that meet the interests of different
stakeholders within the program

 Agreeing on what success means for various stakeholders in the program
 Building monitoring and evaluation capacity of stakeholders

References
1. International Federation of Red Cross and Red Crescent Societies. (2011).
Project/programme monitoring and evaluation (M&E) guide. Geneva. IFRC
& RCS.
2. Jeffrey, C. (2012). Program Monitoring & Visualization: An Exploratory Approach.
United States: Springer.
3. Levin, G. (2012). Program Management: A Life Cycle Approach. London. Taylor &
Francis.
4. Mertens, D. and Wilson, A. (2012). Program Evaluation: Theory and Practice: A
Comprehensive Guide. New York: Guilford Press.
5. Thyer, B. and Padgett, D. (2010). Program Evaluation: An Introduction, 5th ed. USA:
Wadsworth.

MODULE REFERENCES

1. Chikati, J. (2009). Monitoring and Evaluation Handbook. Nairobi: Regional
Partnership for Resource Development Publication.
2. Community Places. (2014). Community Planning Toolkit: Community Engagement.
United States: Community Places.
3. International Federation of Red Cross and Red Crescent Societies. (2011).
Project/programme Monitoring and Evaluation (M&E) Guide. Geneva: IFRC
& RCS.
4. International Labour Organisation. (2010). Project Design Manual: A Step-by-step Tool
to Support the Development of Cooperatives and Other Forms of Self-help
Organizations. Accessed from www.ilo.org/coop
5. Jeffrey, C. (2012). Program Monitoring & Visualization: An Exploratory Approach.
United States: Springer.
6. Kusek, J. and Rist, R. (2012). Ten Steps to a Results-Based Monitoring & Evaluation
System. Washington, DC: World Bank.
7. Levin, G. (2012). Program Management: A Life Cycle Approach. London: Taylor &
Francis.
8. Mertens, D. and Wilson, A. (2012). Program Evaluation: Theory and Practice: A
Comprehensive Guide. New York: Guilford Press.
9. Mikov, M., Beale, W. and Tua, J. (2007). Work Plans and Budgets. Australia: WWF.
Retrieved from https://intranet.panda.org/documents/folder.cfm?uFolderID=60981
10. Pathfinder International. (2009). Program Design and Management. United States:
Pathfinder.
11. Preskill, H. and Russ-Eft, D. (2005). Building Evaluation Capacity: 72 Activities for
Teaching and Training. United States: SAGE Publications.
12. Thiry, M. (2012). Program Management: Fundamentals of Project Management.
England: Gower Publishing Limited.
