MONITORING & EVALUATION - LECTURE NOTES
SESSION 1: OVERVIEW OF MONITORING AND EVALUATION
(i) What is Monitoring and Evaluation?
• Monitoring and Evaluation is a process of continued gathering of information and its analysis, in order to determine whether progress is being made towards pre-specified goals and objectives, and to highlight whether there are any unintended (positive or negative) effects from a project/programme and its activities.
(ii) What is Monitoring?
• Monitoring is a continuous process of collecting, analyzing, documenting, and reporting information on progress towards achieving set project objectives. It helps identify trends and patterns, adapt strategies and inform decisions for project or programme management.
(iii) What is Evaluation?
• Evaluation is a periodic assessment, as systematic and objective as possible, of an on-going or completed project, programme or policy, covering its design, implementation and results. It involves gathering, analysing, interpreting and reporting information based on credible data. The aim is to determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability.
(iv) Purpose/Importance of Monitoring and Evaluation
Timely and reliable M&E provides information to:
• Support project/programme implementation with accurate, evidence-based reporting that informs management and decision-making to guide and improve project/programme performance.
• Contribute to organizational learning and knowledge sharing by reflecting upon and sharing experiences and lessons.
• Uphold accountability and compliance by demonstrating whether or not our work has been carried out as agreed, in compliance with established standards and with any other stakeholder requirements.
• Provide opportunities for stakeholder feedback.
• Promote and celebrate project/programme work by highlighting accomplishments and achievements, building morale and contributing to resource mobilization.
• Support strategic management through the provision of information to inform the setting and adjustment of objectives and strategies.
• Build the capacity, self-reliance and confidence of stakeholders, especially beneficiaries and implementing staff and partners, to effectively initiate and implement development initiatives.
(v) Characteristics of Monitoring and Evaluation
Monitoring tracks changes in program performance or key outcomes over time. It has the following characteristics:
• Conducted continuously
• Keeps track and maintains oversight
• Documents and analyzes progress against planned program activities
• Focuses on program inputs, activities and outputs
• Looks at processes of program implementation
• Considers program results at output level
• Considers continued relevance of program activities to resolving the health problem
• Reports on program activities that have been implemented
• Reports on immediate results that have been achieved
Evaluation is a systematic approach to attribute changes in specific outcomes to program activities. It has the following characteristics:
• Conducted at important program milestones
• Provides in-depth analysis
• Compares planned with actual achievements
• Looks at processes used to achieve results
• Considers results at outcome level and in relation to cost
• Considers overall relevance of program activities for resolving health problems
• References implemented activities
• Reports on how and why results were achieved
• Contributes to building theories and models for change
• Attributes program inputs and outputs to observed changes in program outcomes and/or impact
(vi) Key benefits of Monitoring and Evaluation
a. Provide regular feedback on project performance and show any need for 'mid-course' corrections
b. Identify problems early and propose solutions
c. Monitor access to project services and outcomes by the target population
d. Evaluate achievement of project objectives, enabling the tracking of progress towards achievement of the desired goals
e. Incorporate stakeholder views and promote participation, ownership and accountability
f. Improve project and programme design through feedback provided from baseline, mid-term, terminal and ex-post evaluations
g. Inform and influence organizations through analysis of the outcomes and impact of interventions, and the strengths and weaknesses of their implementation, enabling development of a knowledge base of the types of interventions that are successful (i.e. what works, what does not and why)
h. Provide the evidence base for building consensus between stakeholders
SESSIONS 2 & 3: SELECTING INDICATORS, BASELINES AND TARGETS
a) The indicator: "An indicator is defined as a quantitative measurement of an objective to be achieved, a resource mobilised, an output accomplished, an effect obtained or a context variable (economic, social or environmental)". It specifies the precise information needed to assess whether intended changes have occurred. Indicators can be either quantitative (numeric) or qualitative (descriptive observations). Indicators are typically taken directly from the logframe, but should be checked in the process to ensure they are SMART (specific, measurable, achievable, relevant and time-bound).
b) The indicator definition - key terms in the indicator that need further detail for precise and reliable measurement.
c) The methods/sources - identifies sources of information and data collection methods and tools, such as the use of secondary data, regular monitoring or periodic evaluation, baseline or endline surveys, and interviews.
d) The frequency/schedules - how often the data for each indicator will be collected, e.g. weekly, monthly, quarterly or annually.
e) The person(s) responsible - lists the people responsible and accountable for the data collection and analysis, e.g. community volunteers, field staff, project/programme managers, local partner(s) and external consultants.
f) The information use/audience - identifies the primary use of the information and its intended audience. Some examples of information use for indicators include:
• Monitoring project/programme implementation for decision-making
• Evaluating impact to justify the intervention
• Identifying lessons for organizational learning and knowledge-sharing
• Assessing compliance with donor or legal requirements
• Reporting to senior management, policy-makers or donors for strategic planning
• Accountability to beneficiaries, donors and partners
• Resource mobilization
A small illustrative sketch of plan elements a) to f) is given below.
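To make this concrete, the sketch below models one row of a hypothetical indicator tracking plan in Python, mirroring elements a) to f) above. The field names and example values are assumptions for illustration, not part of these notes.

from dataclasses import dataclass

@dataclass
class IndicatorPlanRow:
    """One row of an illustrative indicator tracking plan (elements a-f above)."""
    indicator: str              # a) the indicator statement, checked to be SMART
    definition: str             # b) key terms defined for precise, reliable measurement
    methods_sources: list[str]  # c) data collection methods and sources
    frequency: str              # d) how often the data are collected
    responsible: list[str]      # e) person(s) responsible for collection and analysis
    information_use: list[str]  # f) primary use of the information and its audience

# Hypothetical example row (all values invented)
row = IndicatorPlanRow(
    indicator="% of trained farmers adopting improved seed within 12 months",
    definition="'Adoption' means planting improved seed on at least one plot in the season",
    methods_sources=["household survey", "extension visit records"],
    frequency="annually",
    responsible=["field staff", "M&E officer"],
    information_use=["progress reporting to the donor", "adjusting training content"],
)
print(row.indicator, "-", row.frequency)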
g) Types of indicators
✓ Context indicators measure an economic, social or environmental variable concerning an entire region, sector or group and the project location, as well as relevant national and regional policies and programs. The situation before the project starts provides the baseline data, drawn primarily from official statistics.
✓ Input indicators measure the human and financial resources, physical facilities, equipment and supplies that enable implementation of a program.
✓ Process indicators reflect whether a program is being carried out as planned and how well program activities are being carried out.
✓ Output indicators relate to activities, measured in physical or monetary units; they capture the results of program efforts (inputs and processes/activities) at the program level.
✓ Outcome indicators measure the program's level of success in improving service accessibility, utilization or quality.
✓ Result indicators capture the direct and immediate effects arising from the project activities and provide information on changes for the direct project beneficiaries.
✓ Impact indicators refer to the long-term, cumulative effects of programs over time, beyond the immediate and direct effects on beneficiaries.
✓ Exogenous indicators are those that cover factors outside the control of the project but which might affect its outcome.
✓ Proxy indicators provide an indirect way to measure the subject of interest.
h) Characteristics of Good Indicators
a) Specific - focused and clear
b) Measurable - quantifiable and reflecting change
c) Attainable - reasonable in scope and achievable within the set time-frame
d) Relevant - pertinent to the review of performance
e) Time-bound/Trackable - progress can be charted chronologically
Indicators should also be CREAM: Clear, Relevant, Economical, Adequate and Monitorable.
i) Baselines and Targets
• A baseline is qualitative or quantitative information that provides data at the beginning of, or just prior to, the implementation of an intervention.
• Targets are established for each indicator by starting from the baseline level and adding the desired level of improvement in that indicator (a small worked example follows below).
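As a minimal worked sketch of how a baseline and a target relate, the Python snippet below derives a target from a baseline plus a desired improvement and reports progress toward it. The indicator and all numbers are hypothetical.

def set_target(baseline: float, desired_improvement: float) -> float:
    # A target is the baseline level plus the desired level of improvement.
    return baseline + desired_improvement

def progress_towards_target(baseline: float, current: float, target: float) -> float:
    # Share of the planned improvement achieved so far (0.0 = none, 1.0 = target reached).
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / planned_change

# Hypothetical indicator: immunization coverage (%) in the project area
baseline = 40.0
target = set_target(baseline, 25.0)   # aim for 65%
print(f"target = {target}%")
print(f"progress at midterm (52%): {progress_towards_target(baseline, 52.0, target):.0%}")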
SESSION 4: FRAMEWORKS FOR EVALUATION - THE LOGICAL FRAMEWORK APPROACH (LFA)
Four types of frameworks dominate the M&E field:
a) Conceptual frameworks are also known as theoretical or causal frameworks.
b) Results-based frameworks are also known as strategic frameworks and serve as a
management tool with an emphasis on results. The purpose of results frameworks is to
increase focus, select strategies, and allocate resources accordingly.
Goal (Impact): the higher-order objective to which a development intervention is intended to contribute.
Outcome: the likely or achieved short-term and medium-term effects of an intervention's outputs.
Output: the products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.
Activity: actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources, are mobilized to produce specific outputs.
Inputs: the financial, human, and material resources used for the development intervention.
c) Logical frameworks are also known as logframes and are commonly used to help set clear program objectives and define indicators of success. They also outline the critical assumptions on which a project is based, similar to the results framework.
d) Logic models are also known as M&E frameworks and are commonly used to present a clear plan for the use of resources to meet the desired goals and objectives. They are a useful tool for presenting programmatic and evaluation components.
The choice of a particular type of framework, whether a conceptual framework, results framework, logical framework or logic model, depends on the program's specific needs, the M&E team's preferences and donor requirements.
In particular, the LFA is a systematic planning procedure for complete project cycle management and a participatory planning, monitoring and evaluation tool.
The LFA provides:
• A concise summary of the Programme;
• A tool for planning a logical set of interventions;
• A tool for appraising a Programme document;
• A tool for monitoring progress made with regard to delivery of outputs and activities;
• A tool for evaluating the impact of Programme outputs, e.g. progress in achieving purpose and goal.
The logframe matrix has four columns:
• Narrative summary - a snapshot of the different levels of the project objectives, known as the "hierarchy of objectives".
• Objectively verifiable indicators (OVI) - how will we know we have been successful?
• Means of verification (MOV) - how will we check our reported results?
• Assumptions/Risks - what assumptions underlie the structure of our project, and what is the risk that they will not prevail?
The hierarchy of objectives comprises:
• Goal (impact) - longer-term effects; the general or overall objective.
• Purpose - why are we doing this? The direct and immediate effects/objectives/outcomes/results.
• Outputs - what are the deliverables? The goods and services produced; the operational objectives.
• Activities - what tasks will we undertake to deliver the outputs?
• Inputs - by what means do we carry out the activities, and what does it cost?
• Pre-conditions - what needs to be fulfilled before activities can start?
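One way to hold the logframe content in a machine-readable form is a simple nested mapping, as in the illustrative Python sketch below. The levels and columns follow the matrix just described; every example entry is invented.

# Each level of the hierarchy of objectives carries the same four columns:
# narrative summary, objectively verifiable indicators (OVI),
# means of verification (MOV) and assumptions/risks.
logframe = {
    "goal": {
        "narrative": "Reduced child malnutrition in the district",     # hypothetical
        "ovi": ["stunting rate among under-fives"],
        "mov": ["national nutrition survey"],
        "assumptions": ["no major drought or conflict"],
    },
    "purpose": {
        "narrative": "Households adopt improved feeding practices",
        "ovi": ["% of caregivers using recommended practices"],
        "mov": ["endline household survey"],
        "assumptions": ["health facilities remain staffed"],
    },
    "outputs": {
        "narrative": "Caregivers trained; demonstration gardens established",
        "ovi": ["number of caregivers trained", "number of gardens"],
        "mov": ["training registers", "field visit reports"],
        "assumptions": ["trained volunteers stay in the community"],
    },
    "activities": {
        "narrative": "Run training sessions; distribute seed kits",
        "ovi": ["inputs and budget used"],
        "mov": ["project accounts"],
        "assumptions": ["pre-condition: funding released on time"],
    },
}

for level, row in logframe.items():
    print(f"{level}: {row['narrative']}")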
SESSION 5a: MONITORING CRITERIA
a) Project monitoring & control cycle
To achieve effective control over project implementation, it is necessary to assess progress at regular intervals in terms of physical completion of scheduled activities, actual cost incurred in performing those activities and achievement of desired performance levels, comparing the status with the plans to find deviations. This assessment process is known as 'monitoring'.
Figure: the project monitoring and control cycle - plan; record actual status; compare actual status against the plan to identify variances; where variances exist, prepare estimates to complete, an action plan and revised schedules/budgets.
Key elements of project monitoring and control:
✓ Project status reporting
✓ Conducting a project review with stakeholders
✓ Controlling schedule variances
✓ Controlling scope and change requests
✓ Controlling budget
✓ Tracking and mitigating risks
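As a minimal sketch of the 'compare' step in the cycle above, the Python function below flags activities whose actual cost deviates from the plan by more than a chosen threshold. The threshold and all figures are hypothetical.

def cost_variances(planned: dict[str, float], actual: dict[str, float],
                   threshold: float = 0.10) -> dict[str, float]:
    # Relative cost variance per activity; keep only those beyond the threshold.
    variances = {}
    for activity, plan_cost in planned.items():
        spent = actual.get(activity, 0.0)
        variances[activity] = (spent - plan_cost) / plan_cost  # +ve = overspend
    return {a: v for a, v in variances.items() if abs(v) > threshold}

# Hypothetical planned vs actual spend per activity (any currency unit)
planned = {"training": 10_000, "seed kits": 8_000, "monitoring visits": 2_000}
actual = {"training": 12_500, "seed kits": 7_800, "monitoring visits": 2_100}
for activity, v in cost_variances(planned, actual).items():
    print(f"{activity}: {v:+.0%} deviation -> prepare an action plan / revised budget")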
b) Types of monitoring
A project/programme usually monitors a variety of things according to its specific informational needs. These monitoring types often occur simultaneously as part of an overall project/programme monitoring system.
Table 1: Common types of monitoring
• Results monitoring: Tracks effects and impacts to determine whether the project/programme is on target towards its intended results (inputs, activities, outputs, outcomes, impact, assumptions/risks monitoring) and whether there may be any unintended impact (positive or negative).
• Process (activity) monitoring: Tracks the use of inputs and resources, the progress of activities and how activities are delivered - the efficiency in time and resources and the delivery of outputs.
• Compliance monitoring: Ensures compliance with, say, donor regulations and expected results, grant and contract requirements, local governmental regulations and laws, and ethical standards.
• Context (situation) monitoring: Tracks the setting in which the project/programme operates, especially as it affects identified risks and assumptions, and any unexpected considerations that may arise, including the larger political, institutional, funding and policy context that affects the project/programme.
• Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme, including beneficiary satisfaction or complaints with the project/programme, their participation, treatment, access to resources and their overall experience of change.
• Financial monitoring: Accounts for costs by input and activity within predefined categories of expenditure, to ensure implementation is according to the budget and time frame.
• Organizational monitoring: Tracks sustainability, institutional development and capacity building in the project/programme and with its partners.
c) Monitoring Questions and the LogFrame
Monitoring questions map onto the logframe levels as follows:
• Goal: measuring changes at goal level requires a longer time frame, and is therefore dealt with by evaluation rather than monitoring.
• Outcomes: are outputs leading to achievement of the outcomes? How do beneficiaries feel about the work?
• Outputs: are activities leading to the expected outputs?
• Activities: are activities being implemented on schedule and within budget?
• Inputs: are finance, personnel and materials available on time and in the right quantities and quality?
At every level: is there anything happening that should lead us to modify the operation's implementation plan?
SESSION 5b: EVALUATION CRITERIA FOR PROJECTS
a) Five Part Evaluation Criteria
• Relevance - Was/is the project a good idea given the situation to improve? Was the logic of the project correct? Why or why not? The validity of the Overall Goal and Project Purpose at the evaluation stage.
• Effectiveness - Have the planned results been achieved? Why or why not? The degree to which the Project Purpose has been achieved by the project Outputs.
• Efficiency - Have resources been used in the best possible way? Why or why not? The productivity in project implementation; the degree to which Inputs have been converted into Outputs.
• Impact - To what extent has the project contributed towards its longer-term goals? Why or why not? Have there been any unanticipated positive or negative consequences of the project, and why did they arise? The positive and negative changes produced, directly or indirectly, as a result of the implementation of the project.
• Sustainability - Can the outcomes be sustained after the project funding ends so as to ensure continued impacts? Why or why not? The durability of the benefits and development effects produced by the project after its completion.
Figure: the evaluation criteria in relation to the intervention logic (objectives, inputs, operations, results and impacts): relevance, efficiency, effectiveness, and utility and sustainability.
b) Evaluation Questions and the LogFrame
Evaluation questions map onto the logframe levels as follows:
• Impact: What changes did the project bring about? Were there any unplanned or unintended changes?
• Sustainability: Are the benefits likely to be maintained for an extended period after assistance ends?
• Effectiveness: Were the operation's objectives achieved? Did the outputs lead to the intended outcomes?
• Relevance: Were the operation's objectives consistent with beneficiaries' needs and with Red Cross Red Crescent policies?
• Efficiency: Were stocks of items available on time and in the right quantities and quality? Were activities implemented on schedule and within budget? Were outputs delivered economically?
SESSION 6a: TYPES OF EVALUATION
Three ways of classifying evaluations:
• When it is done - ex-ante evaluation; formative evaluation; summative (end-of-project) evaluation; and ex-post evaluation.
• Who is doing it - external evaluation; internal evaluation or self-assessment.
• What methodology or technicality is used - real-time evaluations (RTEs); meta-evaluations; thematic evaluations; cluster/sector evaluations; impact evaluations.
The details are as follows:
a) Ex-ante evaluation: Conducted before the implementation of a project as part of the planning. A needs assessment determines who needs the program, how great the need is, and what might work to meet the need. An implementation (feasibility) evaluation monitors the fidelity of the program or technology delivery, and whether or not the program is realistically feasible within the programmatic constraints.
b) Formative evaluation: Conducted during the implementation of the project. Used to determine the efficiency and effectiveness of the implementation process, to improve performance and assess compliance; provides information to improve processes and learn lessons. Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures. Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes. Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
c) Midterm evaluations are formative in purpose and occur midway through implementation.
d) Summative evaluation: Conducted at the end of the project to assess the state of project implementation and achievements, and to collate lessons on content and the implementation process. Occurs at the end of project/programme implementation to assess effectiveness and impact.
e) Ex-post evaluation: Conducted some time after the project is completed, to assess the long-term impact and sustainability of project effects. Identifies factors of success to inform other projects.
f) External evaluation: Initiated and controlled by the donor as part of a contractual agreement. Conducted by independent people who are not involved in implementation, often guided by project staff.
g) Internal evaluation or self-assessment: Internally guided reflective processes, initiated and controlled by the group for its own learning and improvement. Sometimes done by consultants who are outsiders to the project. Ownership of the information needs to be clarified before the review starts.
h) Real-time evaluations (RTEs): Undertaken during project/programme implementation to provide immediate feedback for modifications to improve on-going implementation.
i) Meta-evaluations: Used to assess the evaluation process itself. Some key uses of meta-evaluations include: taking inventory of evaluations to inform the selection of future evaluations; combining evaluation results; checking compliance with evaluation policy and good practices; and assessing how well evaluations are disseminated and utilized for organizational learning and change.
j) Thematic evaluations: Focus on one theme, such as gender or environment, typically across a number of projects, programmes or the whole organization.
k) Cluster/sector evaluations: Focus on a set of related activities, projects or programmes, typically across sites and implemented by multiple organizations.
l) Impact evaluations: Broader evaluations that assess the overall or net effects, intended or unintended, of the program or technology as a whole; they focus on the effect of a project/programme rather than on its management and delivery. They therefore typically occur after project/programme completion, during a final evaluation or an ex-post evaluation. However, impact may also be measured during implementation in longer projects/programmes and when feasible.
SESSION 6b: EVALUATION MODELS AND APPROACHES
• Behavioral Objectives Approach - "Is the program, product, or process achieving its objectives?" Focuses on the degree to which the objectives of a program, product, or process have been achieved.
• The Four-Level Model or Kirkpatrick Model - "What impact did the training have on participants in terms of their reactions, learning, behavior, and organizational results?" Often used to evaluate training and development programs; focuses on four levels of training outcomes: reactions, learning, behavior, and results.
  o Reaction - did trainees find the training a valuable experience, and how did they feel about the instructor, the topic, the material, its presentation, and the venue?
  o Learning - how much has their knowledge increased as a result of the training?
  o Behavior - have trainees changed their behavior, based on the training they received?
  o Results - was the training good for business, good for the employees, or good for the bottom line?
• Decision-Making Models - "What management decisions are required concerning the program?" The evaluator's job is to provide information to management to help them in making decisions about programs, products, etc. Daniel Stufflebeam's CIPP Model has been very popular. CIPP stands for context evaluation, input evaluation, process evaluation, and product evaluation. Context evaluation includes examining and describing the context of the program you are evaluating, conducting a needs and goals assessment, determining the objectives of the program, and determining whether the proposed objectives will be sufficiently responsive to the identified needs; it helps in making program planning decisions. Input evaluation includes activities such as a description of the program inputs and resources, a comparison of how the program might perform compared to other programs, a prospective benefit/cost assessment (i.e., deciding whether you think the benefits will outweigh the costs of the program, before the program is actually implemented), an evaluation of the proposed design of the program, and an examination of what alternative strategies and procedures for the program should be considered and recommended. Process evaluation includes examining how a program is being implemented, monitoring how the program is performing, auditing the program to make sure it is following required legal and ethical guidelines, and identifying defects in the procedural design or in the implementation of the program. Product evaluation includes determining and examining the general and specific anticipated and unanticipated outcomes of the program (which requires using impact or outcome assessment techniques).
• Responsive Evaluation - "What does the program look like to different people?" Calls for evaluators to be responsive to the information needs of various audiences or stakeholders.
• Goal-Free Evaluation - "What are all the effects of the program, including any side effects?" Focuses on the actual outcomes rather than the intended outcomes of a program; thus, the evaluator remains unaware of the program's stated goals and objectives.
• Adversary/Judicial Approaches - "What are the arguments for and against the program?" These adopt the legal paradigm for program evaluation, in which two teams of evaluators representing two views of the program's effects argue their cases based on the evidence (data) collected. A judge or a panel of judges then decides which side has made the better case and makes a ruling.
• Consumer-Oriented Approaches - "Would an educated consumer choose this program or product?" Helps consumers to choose among competing programs or products.
• Expertise/Accreditation Approaches - "How would professionals rate this program?" The accreditation model relies on expert opinion to determine the quality of programs; the purpose is to provide professional judgments of quality.
• Utilization-Focused Evaluation - "What are the information needs of stakeholders, and how will they use the findings?" Evaluation done for and with specific, intended primary users, for specific, intended uses; assumes stakeholders will have a high degree of involvement in many, if not all, phases of the evaluation.
• Participatory/Collaborative Evaluation - "What are the information needs of those closest to the program?" Engages stakeholders in the evaluation process, so they may better understand evaluation and the program being evaluated and ultimately use the evaluation findings for decision-making purposes.
• Empowerment Evaluation - "What are the information needs to foster improvement and self-determination?" The use of evaluation concepts, techniques, and findings to foster improvement and self-determination; a catalyst for learning in the workplace and a social activity in which evaluation issues are constructed by, and acted on by, organization members.
• Organizational Learning - "What are the information and learning needs of individuals, teams, and the organization in general?" Evaluation here is ongoing and integrated into all work practices.
• Theory-Driven Evaluation - "How is the program supposed to work? What are the assumptions underlying the program's development and implementation?" Focuses on theoretical rather than methodological issues, using the program's rationale or theory as the basis of an evaluation to understand the program's development and impact, via a plausible model of how the program is supposed to work.
• Success Case Method - "What is really happening?" Focuses on the practicalities of defining successful outcomes and success cases, and uses some of the processes from theory-driven evaluation to determine the linkages, which may take the form of a logic model, an impact model, or a results map. Evaluators using this approach gather stories within the organization to determine what is happening and what is being achieved.
SESSION 7: THE EVALUATION PROCESS
Evaluation operates within multiple domains and serves a variety of functions at the same time. Moreover, it is subject to budget, time and data constraints that may force the sacrifice of many of the basic principles of impact evaluation design. Before entering into the details of evaluation methods, it is important for the reader to have a clear picture of the way an evaluation procedure works.
(i) The M&E Plan/Strategy
A comprehensive planning document for all monitoring and evaluation activities within a program. This plan documents the key M&E questions to be addressed: what indicators will be collected, how, how often, from where, and why; baseline values, targets, and assumptions; how data are going to be analyzed/interpreted; and how/how often reports will be developed and distributed.
Typically, the components of an M&E plan are:
• Establishing goals and objectives
• Setting the specific M&E questions
• Determining the activities to be implemented
• The methods and designs to be used for monitoring and evaluation
• The data to be collected
• The specific tools for data collection
• The required resources
• The responsible parties to implement specific components of the plan
• The expected results
• The proposed timeline
One possible structured layout of these components is sketched below.
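The Python sketch below is one such layout; the field names and values are assumptions for illustration only, not a prescribed format.

# Illustrative skeleton of an M&E plan; every value is a placeholder.
me_plan = {
    "goals_and_objectives": "Improve maternal health service uptake",   # hypothetical
    "me_questions": ["Is antenatal attendance increasing?",
                     "Which outreach activities drive the change?"],
    "activities": ["community outreach", "clinic staff training"],
    "methods_and_designs": ["routine monitoring", "pre/post household survey"],
    "data_to_collect": ["antenatal visits per month", "client satisfaction"],
    "data_collection_tools": ["facility register extract", "questionnaire"],
    "required_resources": {"budget": 15_000, "staff": ["M&E officer", "enumerators"]},
    "responsible_parties": {"routine monitoring": "field staff",
                            "surveys": "external consultant"},
    "expected_results": ["20% rise in antenatal visits by endline"],
    "timeline": {"baseline": "Q1", "midterm review": "Q4", "endline": "Q8"},
}
print(sorted(me_plan.keys()))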
(ii) Monitoring and Evaluation Cycle
Step 1 - Identify the purpose and scope of the M&E system
• Formulating objectives
• Selecting indicators
• Setting baselines and targets
Step 2 - Plan for data collection and management
• Data sources: secondary data; primary data (sample surveys, project output data, internal/external assessments, participatory assessments); qualitative studies (PRA, mapping, KIIs, FGDs, observation, checklists)
• Planning for data collection: prepare data collection guidelines, pre-test data collection tools, train data collectors, address ethical issues
Step 3 - Plan for data analysis
Step 4 - Plan for information reporting and utilization
Step 5 - Plan for M&E human resources and capacity building
Step 6 - Prepare the M&E budget
(iii) Setting up an M&E system often involves the following aspects:
a) Assess the existing readiness and capacity for monitoring and evaluation - Review current capacity within the organization (or outsourced outside it) and its partners which will be responsible for project implementation, covering: technical skills, managerial skills, existence and quality of data systems, available technology and existing budgetary provision.
b) Establish the purpose and scope - Why is M&E needed and how comprehensive should the system be? What should be the scope and rigour, and should the M&E process be participatory?
c) Identify and agree with the main stakeholders the outcomes and development objective(s) - Set a development goal and the project purpose or expected outcomes, outputs, activities and inputs. Indicators, baselines and targets are similarly derived.
d) Select key indicators - the qualitative or quantitative variables that measure project performance and achievements for all levels of project logic with respect to inputs, activities, outputs, outcomes and impact, as well as the wider environment; this requires pragmatic judgment in the careful selection of indicators.
e) Develop an evaluation framework - Set out the methods, approaches and evaluation designs (experimental, quasi-experimental and non-experimental) to be used to address the question of whether change observed through monitoring indicators can be attributed to the project interventions.
f) Set baselines and plan for results - The baseline is the first measurement of an indicator, which sets the pre-project condition against which change can be tracked and evaluated.
g) Select data collection methods as applicable.
h) Set targets and develop a results framework - A target is a specification of the quantity, quality, timing and location to be realized for a key indicator by a given date. Starting from the baseline level for an indicator, the desired improvement is defined, taking account of planned resource provision and activities, to arrive at a performance target for that indicator.
i) Plan monitoring, data analysis, communication and reporting (the monitoring and evaluation plan) - Implementation monitoring, which tracks the inputs, activities and outputs in annual or multi-year work plans, and results monitoring, which tracks achievement of outcomes and impact, are both needed. The demands for information at each level of management need to be established, responsibilities allocated, and plans made for:
   i. what data are to be collected and when;
   ii. how data are collected and analyzed;
   iii. who collects and analyses data;
   iv. who reports information; and
   v. when.
j) Facilitate the necessary conditions and capacities to sustain the M&E system - the organizational structure for M&E, partners' responsibilities and information requirements, staffing levels and types, responsibilities and internal linkages, incentives and training needs, relationships with partners and stakeholders, horizontal and vertical lines of communication and authority, and physical resource needs and budget.
SESSION 8: EVALUATION DESIGN
Developing an evaluation design includes:
• Determining what type of design is required to answer the questions posed
• Sampling
• Selecting a methodological approach and data collection instruments
• Selecting a comparison group
• Determining timing, sequencing, and frequency of data collection
Evaluation research may adopt a quantitative, a qualitative or a mixed-methods design approach. Quantitative designs normally take the form of experimental designs. Qualitative evaluation approaches are non-experimental approaches which answer 'why' and 'how' questions.
The following are brief descriptions of the most commonly used evaluation (and research) designs.
• One-Shot Design: The evaluator gathers data following an intervention or program. For example, a survey of participants might be administered after they complete a workshop.
• Retrospective Pre-test: As with the one-shot design, the evaluator collects data at one time but asks for recall of behaviour or conditions prior to, as well as after, the intervention or program.
• One-Group Pre-test/Post-test Design: The evaluator gathers data prior to and following the intervention or program being evaluated.
• Time Series Design: The evaluator gathers data prior to, during, and after the implementation of an intervention or program.
• Pre-test/Post-test Control-Group Design: The evaluator gathers data on two separate groups prior to and following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention. The other group, called the control group, does not receive the intervention.
• Post-test-Only Control-Group Design: The evaluator collects data from two separate groups following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention or program, while the other group, typically called the control group, does not. Data are collected from both of these groups only after the intervention.
• Case Study Design: When evaluations are conducted for the purpose of understanding the program's context, participants' perspectives, the inner dynamics of situations, and questions related to participants' experiences, and where generalization is not a goal, a case study design, with an emphasis on the collection of qualitative data, might be most appropriate. Case studies involve in-depth descriptive data collection and analysis of individuals, groups, systems, processes, or organizations. In particular, the case study design is most useful when you want to answer how and why questions and when there is a need to understand the particulars, uniqueness, and diversity of the case.
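For the pre-test/post-test control-group design described above, one common way to estimate a programme effect is a simple difference-in-differences on group means. The Python sketch below uses invented scores and assumes the two groups are otherwise comparable.

from statistics import mean

def difference_in_differences(treat_pre, treat_post, control_pre, control_post):
    # Change in the treatment group minus change in the control group.
    return (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))

# Hypothetical indicator scores before and after the intervention
treatment_pre, treatment_post = [52, 48, 55, 50], [68, 63, 70, 66]
control_pre, control_post = [51, 49, 53, 50], [56, 54, 58, 55]

effect = difference_in_differences(treatment_pre, treatment_post, control_pre, control_post)
print(f"Estimated programme effect: {effect:.1f} points")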
Figure: decisions for designing an evaluation study (e.g. whether a comparison group will be used and whether random assignment to groups is feasible).
SESSION 9: METHODS OF EVALUATION AND TOOLS
a) Evaluation Methods
Informal and less-structured methods:
• Conversations with concerned individuals
• Community interviews
• Field visits
• Reviews of records
• Key informant interviews
• Participant observation
• Focus group interviews
Formal and more-structured methods:
• Direct observation
• Questionnaires
• One-time surveys
• Panel surveys
• Censuses
• Field experiments
The following are common evaluation methods/tools, with a description and remarks on each:
• Case study: A detailed description of individuals, communities, organizations, programmes, time periods or a story. Useful in evaluating complex situations and exploring qualitative impact; helps to illustrate findings and includes comparisons (commonalities). Only when combined (triangulated) with other case studies or methods can one draw conclusions about key principles.
• Checklist: A list of items used for validating or inspecting whether procedures/steps have been followed, or the presence of examined behaviours. Allows for systematic review that can be useful in setting benchmark standards and establishing periodic measures of improvement.
• Community book: A community-maintained document of a project belonging to a community; it can include written records, pictures, drawings, songs or whatever community members feel is appropriate. Where communities have low literacy rates, a memory team is identified whose responsibility it is to relate the written record to the rest of the community in keeping with their oral traditions.
• Community interviews/meetings: A form of public meeting open to all community members. Interaction is between the participants and the interviewer, who presides over the meeting and asks questions following a prepared interview guide.
• Direct observation: A record of what observers see and hear at a specified site, using a detailed observation form. Observation may be of physical surroundings, activities or processes. An observation guide is often used to reliably look for consistent criteria, behaviours or patterns. Observation is a good technique for collecting data on behavioural patterns and physical conditions.
• Document review: A review of documents (secondary data) that can provide cost-effective and timely baseline information and a historical perspective of the project/programme. It includes written documentation (e.g. project records and reports, administrative databases, training materials, correspondence, legislation and policy documents) as well as videos, electronic data or photos.
• Focus group discussion: Focused discussion with a small group (usually eight to 12 people) of participants to record attitudes, perceptions and beliefs relevant to the issues being examined. A moderator introduces the topic and uses a prepared interview guide to lead the discussion and extract conversation, opinions and reactions.
• Interviews: An open-ended (semi-structured) interview is a technique for questioning that allows the interviewer to probe and pursue topics of interest in depth (rather than just 'yes/no' questions). A closed-ended (structured) interview systematically follows carefully organized questions (prepared in advance in an interviewer's guide) that only allow a limited range of answers, such as 'yes/no' or a rating/number on a scale. Replies can easily be numerically coded for statistical analysis.
• Key informant interview: An interview with a person having special information about a particular topic. These interviews are generally conducted in an open-ended or semi-structured fashion.
• Laboratory testing: Precise measurement of a specific objective phenomenon, e.g. infant weight or a water quality test.
• Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using non-probability sampling techniques. Structured questionnaires with a limited number of closed-ended questions are used to generate quantitative data that can be collected and analysed quickly.
• Most significant change (MSC): A participatory monitoring technique based on stories about important or significant changes, rather than indicators. The stories give a rich picture of the impact of development work and provide the basis for dialogue over key objectives and the value of development programmes.
• Participant observation: A technique first used by anthropologists (those who study humankind); it requires the researcher to spend considerable time (days) with the group being studied and to interact with them as a participant in their community. This method gathers insights that might otherwise be overlooked, but is time-consuming.
• Participatory rapid (or rural) appraisal (PRA): Uses community engagement techniques to understand community views on a particular issue. It is usually done quickly and intensively, over a two- to three-week period. Methods include interviews, focus groups and community mapping; tools include stakeholder analysis, participatory rural appraisal, beneficiary assessment, and participatory monitoring and evaluation.
• Questionnaire: A data collection instrument containing a set of questions organized in a systematic way, as well as a set of instructions for the data collector/interviewer about how to ask the questions. Typically used in a survey.
• Rapid appraisal (or assessment): A quick, cost-effective technique to gather data systematically for decision-making, using quantitative and qualitative methods such as site visits, observations and sample surveys. This technique shares many of the characteristics of participatory appraisal (such as triangulation and multidisciplinary teams) and recognizes that indigenous knowledge is a critical consideration for decision-making. Methods include: key informant interview, focus group discussion, community group interview, direct observation and mini-survey.
• Statistical data review: A review of population censuses, research studies and other sources of statistical data.
• Story: An account or recital of an event or a series of events. A success story illustrates impact by detailing an individual's positive experiences in his or her own words; a learning story focuses on the lessons learned through an individual's positive and negative experiences (if any) with a project/programme.
• Formal survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g. persons, beneficiaries, adults). An enumerated survey is one in which the survey is administered by someone trained (a data collector/enumerator) to record responses from respondents; a self-administered survey is a written survey completed by the respondent, either in a group setting or in a separate location, and respondents must be literate. Formal surveys include multi-topic or single-topic household/living standards surveys, client satisfaction surveys, core welfare indicators questionnaires, and public expenditure tracking surveys (which track the flow of public funds and the extent to which resources actually reach the target groups). Sampling-related methods cover the sample frame, sample size and sample method, e.g. random: simple (and systematic) or stratified; non-random: purposive (and cluster) and quota sampling, etc.
• Visual techniques: Participants develop maps, diagrams, calendars, timelines and other visual displays to examine the study topics; participants can be prompted to construct visual responses to questions posed by the interviewers, e.g. by constructing a map of their local area. This technique is especially effective where verbal methods can be problematic due to low-literacy or mixed-language target populations, or in situations where the desired information is not easily expressed in either words or numbers.
• Cost-benefit and cost-effectiveness analysis: Assesses whether or not the costs of an activity can be justified by the outcomes and impacts. Cost-benefit analysis measures both inputs and outputs in monetary terms; cost-effectiveness analysis measures inputs in monetary terms and outputs in non-monetary terms.
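As a minimal sketch of the distinction drawn in the last item above, the Python snippet below computes a benefit-cost ratio (inputs and outputs in money) and a cost-effectiveness ratio (cost per non-monetary outcome unit). All figures are hypothetical.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    # Cost-benefit analysis: both inputs and outputs valued in monetary terms.
    return total_benefits / total_costs

def cost_effectiveness_ratio(total_costs: float, outcome_units: float) -> float:
    # Cost-effectiveness analysis: cost per unit of a non-monetary outcome.
    return total_costs / outcome_units

# Hypothetical programme: costs 200,000; monetized benefits 260,000;
# 1,250 children fully immunized (the non-monetary outcome)
print(f"Benefit-cost ratio: {benefit_cost_ratio(260_000, 200_000):.2f}")  # >1 suggests benefits exceed costs
print(f"Cost per child immunized: {cost_effectiveness_ratio(200_000, 1_250):.0f}")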
Advantages and disadvantages of selected M&E tools/methods:
• Surveys
  Advantages: good for gathering descriptive data; can cover a wide range of topics; are relatively inexpensive to use; can be analyzed using a variety of existing software.
  Disadvantages: self-report may lead to biased reporting; data may provide a general picture but lack depth; may not provide adequate information on context.
• Case studies
  Advantages: provide a rich picture of what is happening, as seen through the eyes of many individuals; allow a thorough exploration of interactions between treatment and contextual factors; can help explain changes or facilitating factors that might otherwise not emerge from the data.
  Disadvantages: require a sophisticated and well-trained data collection and reporting team; can be costly in terms of the demands on time and resources; individual cases may be over-interpreted or overgeneralized.
• Interviews
  Advantages: usually yield the richest data, details and new insights; permit face-to-face contact with respondents; provide the opportunity to explore topics in depth; allow the interviewer to experience the affective as well as cognitive aspects of responses; allow the interviewer to explain or help clarify questions, increasing the likelihood of useful responses; allow the interviewer to be flexible in administering the interview to particular individuals or in particular circumstances.
  Disadvantages: expensive and time-consuming; need well-qualified, highly trained interviewers; the interviewee may distort information through recall error, selective perception or a desire to please the interviewer; flexibility can result in inconsistencies across interviews; the volume of information can be very large and may be difficult to transcribe and reduce.
PARTICIPATORY M&E
Participatory evaluation is a partnership approach to evaluation in which stakeholders actively engage in developing the evaluation and all phases of its implementation. Participatory evaluations often use rapid appraisal techniques, for example:
• Key Informant Interviews - Interviews with a small number of individuals who are most knowledgeable about an issue.
• Focus Groups - A small group (8-12 people) is asked to openly discuss ideas, issues and experiences.
• Mini-surveys - A small number of people (25-50) is asked a limited number of questions.
• Neighbourhood Mapping - Pictures show the location and types of changes in an area to be evaluated.
• Flow Diagrams - A visual diagram shows proposed and completed changes in systems.
• Photographs - Photos capture changes in communities that have occurred over time.
• Oral Histories and Stories - Stories capture progress by focusing on one person's or organization's account of change.
Specific applications of the focus group method in evaluations include:
• Identifying and defining problems in project implementation
• Pretesting topics or ideas
• Identifying project strengths, weaknesses, and recommendations
• Assisting with interpretation of quantitative findings
• Obtaining perceptions of project outcomes and impacts
• Generating new ideas
SESSION 10: DATA ANALYSIS AND REPORTING
The term "data" refers to raw, unprocessed information, while "information", or "strategic information", usually refers to processed data or data presented in some sort of context.
• Data (primary or secondary) is a term given to raw facts or figures before they have been processed and analysed.
• Information refers to data that have been processed and analysed for reporting and use.
• Data analysis is the process of converting collected (raw) data into usable information.
(i) Quantitative and Qualitative Data
• Quantitative data measure and explain what is being studied with numbers (e.g. counts, ratios, percentages, proportions, averages).
• Qualitative data explain what is being studied with words (documented observations, representative case descriptions, perceptions, opinions of value, etc.).
• Quantitative methods tend to use structured approaches (e.g. coded responses to surveys) which provide precise data that can be statistically analysed and replicated for comparison.
• Qualitative methods use semi-structured techniques (e.g. observations and interviews) to provide in-depth understanding of attitudes, beliefs, motives and behaviours. They tend to be more participatory and reflective in practice.
Quantitative data are often considered more objective and less biased than qualitative data, but recent debates have concluded that both quantitative and qualitative methods have subjective (biased) and objective (unbiased) characteristics. Therefore, a mixed-methods approach is often recommended, utilizing the advantages of both: measuring what happened with quantitative data and examining how and why it happened with qualitative data.
(ii) Some Data Quality Issues in Monitoring and Evaluation
• Coverage: Will the data cover all of the elements of interest?
• Completeness: Is there a complete set of data for each element of interest?
• Accuracy: Have the instruments been tested to ensure validity and reliability of the data?
• Frequency: Are the data collected as frequently as needed?
• Reporting schedule: Do the available data reflect the periods of interest?
• Accessibility: Are the data needed collectable/retrievable?
• Power: Is the sample size big enough to provide a stable estimate or detect change?
(iii) Data Analysis
Quantitative or qualitative research methods or a complementary combination of both
approaches are used.
Analysis may include:
• Content or textual analysis - making inferences by objectively and systematically identifying specified characteristics of messages.
• Statistical descriptive techniques - the most common include graphical description (histograms, scatter-grams, bar charts, ...), tabular description (frequency distributions, cross-tabulations, ...) and parametric description (mean, median, mode, standard deviation, skewness, kurtosis, ...).
• Statistical inferential techniques - these involve generalizing from a sample to the whole population and testing hypotheses. Hypotheses are stated in mathematical or statistical terms and tested through two- or one-tailed tests (t-test, chi-square, Pearson correlation, F-statistic, ...).
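To illustrate the descriptive and inferential techniques just listed, the Python sketch below computes a few descriptive statistics and a two-tailed two-sample t-test. It assumes SciPy is installed; the data are invented.

from statistics import mean, median, stdev
from scipy import stats  # assumes SciPy is available

# Hypothetical endline scores for project beneficiaries and a comparison group
beneficiaries = [68, 72, 75, 61, 70, 74, 69, 73]
comparison = [60, 63, 58, 65, 62, 59, 64, 61]

# Descriptive statistics
print(f"beneficiaries: mean={mean(beneficiaries):.1f}, "
      f"median={median(beneficiaries):.1f}, sd={stdev(beneficiaries):.1f}")

# Inferential statistics: two-tailed two-sample t-test on the difference in means
t_stat, p_value = stats.ttest_ind(beneficiaries, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests a real difference in means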
SESSION 11: TERMS OF REFERENCE IN M&E AND EVALUATION REPORT TEMPLATE
(i) Terms of Reference in Evaluation
Evaluation organizers are usually the ones who are in charge of a particular project and want to have the project evaluated to better manage project operations. The responsibilities of the evaluation organizers differ from those of the evaluators, who are usually consultants contracted for the evaluation.
Tasks of the evaluation organizers include:
• Preparing the TOR. The TOR is a written document presenting the purpose and scope of the evaluation, the methods to be used, the standard against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and the reporting requirements. The TOR also defines the expertise and tasks required of a contractor as an evaluator, and serves as a job description for the evaluator.
• Appointing evaluator(s);
• Securing the budget for the evaluation;
• Monitoring the evaluation work;
• Providing comments on the draft;
• Publicizing the evaluation report; and
• Providing feedback from the results to concerned parties.
The role of the evaluator includes:
• Preparing the detailed evaluation design;
• Collecting and analyzing information; and
• Preparing an evaluation report.
The role of management includes:
• Management response
• Action on recommendations
• Tracking the status of implementation of recommendations
Management Response Template
Prepared by: ________    Reviewed by: ________
Evaluation recommendation 1: ________
Management response: ________
Key action(s) | Time frame | Responsible unit(s) | Tracking (comments, status)
1.1 | | |
1.2 | | |
Evaluation recommendation 2: ________
Management response: ________
Key action(s) | Time frame | Responsible unit(s) | Tracking (comments, status)
2.1 | | |
2.2 | | |
(ii) Evaluation Report Template
There is no single universal format for M&E reports, but the template below is intended to serve as a guide for preparing meaningful, useful and credible evaluation reports that meet quality standards. It only suggests the content that should be included in a quality evaluation report; it does not purport to prescribe a definitive section-by-section format that all evaluation reports should follow.
E.g.:
Formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)
A. Abstract
B. Executive summary
II. Background
A. Problems or needs addressed
B. Literature review
C. Stakeholders and their information needs
D. Participants
E. Project's objectives
F. Activities and components
G. Location and planned longevity of the project
H. Resources used to implement the project
I. Project's expected measurable outcomes
J. Constraints
III. Evaluation study questions
A. Questions addressed by the study
B. Questions that could not be addressed by the study (when relevant)
IV. Evaluation procedures
A. Sample
1. Selection procedures
2. Representativeness of the sample
3. Use of comparison or control groups, if applicable
B. Data collection
1. Methods
2. Instruments
C. Summary matrix
1. Evaluation questions
2. Variables
3. Data gathering approaches
4. Respondents
5. Data collection schedule
V. Findings
A. Results of the analyses organized by study question
VI. Conclusions
A. Broad-based, summative statements
B. Recommendations, when applicable
Or:
• Table of contents
• Executive summary
• Introduction
• Evaluation scope, focus and approach
• Project facts
• Findings and lessons learned
  o Findings
  o Lessons learned
• Conclusions and recommendations
  o Conclusions
  o Recommendations
• Annexes/appendices
Or as per organizational requirements (Modified from UNDP, 2009, Handbook on Planning,
Monitoring and Evaluating for Development Results)
The report should also include the following:
1. Title and opening pages - Should provide the following basic information:
• Name of the evaluation intervention
• Time frame of the evaluation and date of the report
• Country/Organization/Entity of the evaluation intervention
• Names and organizations of evaluators
• Name of the organization commissioning the evaluation
• Acknowledgements
2. Table of contents - Should always include lists of boxes, figures, tables and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
• Briefly describe the intervention (the project(s), programme(s), policies or other interventions) that was evaluated.
• Explain the purpose and objectives of the evaluation, including the audience for the evaluation and the intended uses.
• Describe key aspects of the evaluation approach and methods.
• Summarize principal findings, conclusions, and recommendations.
5. Introduction - Should:
• Explain why the evaluation was conducted (the purpose), why the intervention is being evaluated at this point in time, and why it addressed the questions it did.
• Identify the primary audience or users of the evaluation, what they wanted to learn from the evaluation and why, and how they are expected to use the evaluation results.
• Identify the intervention (the project(s), programme(s), policies or other interventions) that was evaluated - see the upcoming section on the intervention.
• Acquaint the reader with the structure and contents of the report and how the information contained in the report will meet the purposes of the evaluation and satisfy the information needs of the report's intended users.
6. Description of the intervention/project/process/programme - Provides the basis for report users to understand the logic and assess the merits of the evaluation methodology, and to understand the applicability of the evaluation results. The description needs to provide sufficient detail for the report user to derive meaning from the evaluation. The description should:
• Describe what is being evaluated, who seeks to benefit, and the problem or issue it seeks to address.
• Explain the expected results map or results framework, implementation strategies, and the key assumptions underlying the strategy.
• Link the intervention to national priorities, development partner priorities, corporate strategic plan goals, or other project, programme, organizational or country-specific plans and goals.
• Identify the phase in the implementation of the intervention and any significant changes (e.g., plans, strategies, logical frameworks) that have occurred over time, and explain the implications of those changes for the evaluation.
• Identify and describe the key partners involved in the implementation and their roles.
• Describe the scale of the intervention, such as the number of components (e.g., phases of the project) and the size of the target population for each component.
• Indicate the total resources, including human resources and budgets.
• Describe the context of the social, political, economic and institutional factors, and the geographical landscape within which the intervention operates, and explain the effects (challenges and opportunities) those factors present for its implementation and outcomes.
• Point out design weaknesses (e.g., intervention logic) or other implementation constraints (e.g., resource limitations).
7. Evaluation scope and objectives—The report should provide a clear explanation of the evaluation's scope, primary objectives and main questions.
• Evaluation scope—The report should define the parameters of the evaluation, for example, the time period, the segments of the target population included, the geographic area included, and which components, outputs or outcomes were and were not assessed.
• Evaluation objectives—The report should spell out the types of decisions evaluation users will make, the issues they will need to consider in making those decisions, and what the evaluation will need to achieve to contribute to those decisions.
• Evaluation criteria—The report should define the evaluation criteria or performance standards used, and explain the rationale for selecting the particular criteria used in the evaluation.
• Evaluation questions—Evaluation questions define the information that the evaluation will generate. The report should detail the main evaluation questions addressed by the evaluation and explain how the answers to these questions address the information needs of the report's intended users.
8. Evaluation approach and methods—The evaluation report should describe in detail the selected
methodological approaches, theoretical models, methods and analysis; the rationale for their
selection; and how, within the constraints of time and money, the approaches and methods
employed yielded data that helped answer the evaluation questions and achieved the evaluation
purposes. The description should help the report users judge the merits of the methods used in the
evaluation and the credibility of the findings, conclusions and recommendations.
The description on methodology should include discussion of each of the following:
• Data sources—The sources of information (documents reviewed and stakeholders), the rationale for their selection and how the information obtained addressed the evaluation questions.
• Sample and sampling frame—If a sample was used: the sample size and characteristics; the sample selection criteria (e.g., single women, under 45); the process for selecting the sample (e.g., random, purposive); if applicable, how comparison and treatment groups were assigned; and the extent to which the sample is representative of the entire target population, including discussion of the limitations of the sample for generalizing results (a short illustrative sampling sketch follows after this list).
• Data collection procedures and instruments—Methods or procedures used to collect data, including discussion of data collection instruments (e.g., interview protocol), their appropriateness for the data source and evidence of their reliability and validity.
• Performance standards/indicators—The standard or measure that will be used to evaluate performance relative to the evaluation questions (e.g., national or regional indicators, rating scales).
• Stakeholder engagement—Stakeholders' engagement in the evaluation and how the level of involvement contributed to the credibility of the evaluation and the results.
• Ethical considerations—The measures taken to protect the rights and confidentiality of informants.
• Background information on evaluators—The composition of the evaluation team, the background and skills of team members, and the appropriateness of the technical skill mix, gender balance and geographical representation for the evaluation.
• Major limitations of the methodology—Major limitations of the methodology should be identified and openly discussed as to their implications for the evaluation, as well as steps taken to mitigate those limitations.
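To make the sampling choices described above concrete, the following short Python sketch (an illustrative addition, not part of the original notes) draws a simple random sample and a purposive sample from a purely hypothetical sampling frame; the frame contents, sample size and selection criterion are assumptions for illustration only.

import random

# Hypothetical sampling frame: 500 registered respondents (illustrative data only).
sampling_frame = [
    {"id": i, "gender": random.choice(["F", "M"]), "age": random.randint(18, 70)}
    for i in range(1, 501)
]

def simple_random_sample(frame, n, seed=42):
    """Draw a simple random sample of n respondents; a fixed seed keeps the draw reproducible."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def purposive_sample(frame, criterion, n):
    """Select up to n respondents meeting an explicit, documented criterion
    (e.g., single women under 45), and report how many were eligible."""
    eligible = [r for r in frame if criterion(r)]
    return eligible[:n], len(eligible)

random_sample = simple_random_sample(sampling_frame, n=50)
purposive, eligible_count = purposive_sample(
    sampling_frame, criterion=lambda r: r["gender"] == "F" and r["age"] < 45, n=50
)

# Reporting coverage of the frame supports the discussion of representativeness
# and the limits of generalizing results to the whole target population.
print(f"Random sample: {len(random_sample)} of {len(sampling_frame)} frame members")
print(f"Purposive sample: {len(purposive)} of {eligible_count} eligible members")

Documenting the frame, the selection rule and the seed in this way gives report users what they need to judge whether the sample supports the generalizations made in the findings.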
9. Data analysis—The report should describe the procedures used to analyse the data collected to
answer the evaluation questions. It should detail the various steps and stages of analysis that were
carried out, including the steps to confirm the accuracy of data and the results. The report also
should discuss the appropriateness of the analysis to the evaluation questions. Potential weaknesses
in the data analysis and gaps or limitations of the data should be discussed, including their possible
influence on the way findings may be interpreted and conclusions drawn.
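As one illustration of such an analysis step, the short Python sketch below (an assumption-based example added to these notes, not a prescribed procedure) compares hypothetical actual results against baseline and target values for two invented indicators, the kind of planned-versus-actual comparison a findings section typically reports.

import pandas as pd

# Hypothetical indicator results; all names and figures are invented for illustration.
results = pd.DataFrame(
    {
        "indicator": ["Households with safe water", "Children fully immunized"],
        "baseline": [40, 55],   # % at project start
        "target":   [80, 90],   # % planned by end of project
        "actual":   [72, 91],   # % measured at evaluation
    }
)

# Variance between planned and actual results, and the share of the planned
# change (target minus baseline) that was actually achieved.
results["variance"] = results["actual"] - results["target"]
results["achievement_pct"] = (
    (results["actual"] - results["baseline"]) / (results["target"] - results["baseline"]) * 100
).round(1)

print(results.to_string(index=False))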
10. Findings and conclusions—The report should present the evaluation findings based on the analysis and conclusions drawn from the findings.
• Findings—Should be presented as statements of fact that are based on analysis of the data. They should be structured around the evaluation criteria and questions so that report users can readily make the connection between what was asked and what was found. Variances between planned and actual results should be explained, as well as factors affecting the achievement of intended results. Assumptions or risks in the project or programme design that subsequently affected implementation should be discussed.
• Conclusions—Should be comprehensive and balanced, and highlight the strengths, weaknesses and outcomes of the intervention. They should be well substantiated by the evidence and logically connected to evaluation findings. They should respond to key evaluation questions and provide insights into the identification of and/or solutions to important problems or issues pertinent to the decision making of intended users.
11. Recommendations—The report should provide practical, feasible recommendations directed to the intended users of the report about what actions to take or decisions to make. The recommendations should be specifically supported by the evidence and linked to the findings and conclusions around key questions addressed by the evaluation. They should address sustainability of the initiative and comment on the adequacy of the project exit strategy, if applicable.
12. Lessons learned—As appropriate, the report should include discussion of lessons learned from the evaluation, that is, new knowledge gained from the particular circumstances (intervention, context, outcomes, even evaluation methods) that are applicable to a similar context. Lessons should be concise and based on specific evidence presented in the report.
13. Report annexes—Suggested annexes should include the following to provide the report user with supplemental background and methodological details that enhance the credibility of the report:
• TOR for the evaluation
• Additional methodology-related documentation, such as the evaluation matrix and data collection instruments (questionnaires, interview guides, observation protocols, etc.), as appropriate
• List of individuals or groups interviewed or consulted and sites visited
• List of supporting documents reviewed
• Project or programme results map or results framework
• Summary tables of findings, such as tables displaying progress towards outputs, targets and goals relative to established indicators
• Short biographies of the evaluators and justification of team composition
• Code of conduct signed by evaluators
SESSION 12: BEST PRACTICES, EMERGING TRENDS & M&E CAPACITY BUILDING IN KENYA
(i) Monitoring Best Practices
• Data well-focused to specific audiences and uses (only what is necessary and sufficient).
• Systematic, based upon predetermined indicators and assumptions.
• Also look for unanticipated changes in the project/programme and its context, including any changes in project/programme assumptions/risks; this information should be used to adjust project/programme implementation plans.
• Be timely, so information can be readily used to inform project/programme implementation.
• Be participatory, involving key stakeholders; this reduces costs and builds understanding and ownership.
• Not only for project/programme management: findings should be shared when possible with beneficiaries, donors and any other relevant stakeholders.
(ii) Good M&E Principles for Projects
• Participation: encourage participation "by all who wish to participate and/or who might be affected by the review."
• Decision Making: "Projects will utilize a structured decision-making process."
• Value People: "Projects are not intended to result in a loss of employees but may result in employees being re-deployed to other activities within the department."
• Measurement: for accountability; measures should be accurate, consistent, flexible, comprehensive but not onerous.
• Integrated Program/Process Planning and Evaluation: incorporated into yearly business plans.
• Ethical Conduct/Openness: consider ethical implications; respect and protect the rights of participants.
• Program/Process Focus: focus on improving the program, activity or process.
• Clear and Accurate Reporting of Facts and Review Results.
• Timely Communication of Information and Review Results to Affected Parties.
• Multi-Disciplinary Team Approach: include a range of knowledge and experience; seek assistance from outside the team as required.
• Customer and Stakeholder Involvement: "External and internal customers and stakeholders related to a project should be identified and consulted, if possible, throughout the project."
(iii) Basic Ethics to Expect from an Evaluator
• Systematic inquiry — Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
• Competence — Evaluators provide competent performance to stakeholders.
• Integrity/honesty — Evaluators ensure the honesty and integrity of the entire evaluation process.
• Respect for people — Evaluators respect the security, personal dignity, autonomy and self-worth of respondents, program participants, clients and other stakeholders with whom they interact, including recognition of and special protections for those with diminished autonomy, such as children or prisoners.
• Responsibilities for general and public welfare — Evaluators clarify and take into account the diversity of interests and values that may be related to the general and public welfare.
• Beneficence: the obligation to protect people from harm by maximizing anticipated benefits and minimizing potential risks of harm.
• Justice: benefits and burdens of research should be distributed fairly. In other words, one segment of society (the poor or people of one ethnicity) should not be the only subjects in research designed to benefit everyone.
(iv) Key Success Factors of a Monitoring and Evaluation System
• Clear linkage with the strategic objectives.
• Clear statements of measurable objectives for the project and its components.
• A structured set of indicators covering inputs, processes, outputs, outcomes, impact and exogenous factors.
• Data collection mechanisms capable of monitoring progress over time, including baselines and a means to compare progress and achievements against targets.
• Availability of baselines and a realistic results framework.
• Clear mechanisms for reporting and use of M&E results in decision-making.
• Sustainable organizational arrangements for data collection, management, analysis and reporting.
• A good evaluation process should have six characteristics:
  o Stakeholder involvement
  o Impartiality
  o Usefulness
  o Technical adequacy
  o Cost-effectiveness
  o Timely dissemination and feedback
(v) Factors Contributing to Failure of M&E Systems
• Poor system design, in terms of collecting more data than is needed or can be processed.
• Inadequate staffing of M&E, both in terms of quantity and quality.
• Missing or delayed baseline studies. Strictly, these should be done before the start of project implementation if they are to facilitate with- and without-project comparisons and evaluation.
• Delays in data processing and analysis, often as a result of inadequate processing facilities and staff.
• Personal computers can process data easily and quickly, but making the most of these capabilities requires the correct software and capable staff.
• Inadequate utilization of results.
(vi) Status of M&E in Kenya
• Establishment of a National Monitoring and Evaluation Policy.
• Monitoring and evaluation defined as "a management tool that ensures that policy, programme, and project results are achieved by gauging performance against plans; and drawing lessons from experience of interventions for future implementation effectiveness while fostering accountability to the people of Kenya" (GOK, Monitoring and Evaluation Policy in Kenya, 2012).
• Directorate of M&E (MED) created in 2003.
• National Integrated M&E System, whose implementation is coordinated by the Directorate of M&E in the Department of Planning, to monitor implementation of the Economic Recovery Strategy.
• Rationale for M&E policy: the Constitution of Kenya provides the basis for M&E under articles 10, 36, 174, 185, 201, 203, 225, 226 and 227.
• Challenges include:
  i. Weak M&E culture: it is hard to determine whether M&E influences decision-making, and M&E budgets are not aligned to projects/programmes.
  ii. Weak M&E reporting structures and multiple, uncoordinated M&E systems within and among institutions, making it hard to get full and harmonized results-based information.
  iii. Weak institutional, managerial and technical capacities: evaluations are not adequately conducted.
  iv. Untimely and rarely analysed data, and low utilization of data/information.
  v. Lack of an M&E policy and legal framework.
• Capacity development to complement the policy:
  o Technical and managerial capacity: equip officers with M&E skills and provide backstopping on M&E for state and non-state actors.
  o Standardize M&E activities.
  o MED, in collaboration with local training institutions, shall develop curricula to guide delivery of certificate, diploma, graduate, masters and postgraduate diploma courses.
  o MED to spearhead real-time reporting through uploading, downloading and data analysis on ICT database platforms.
  o Institutional capacity:
    - Units charged with M&E
    - Necessary enabling infrastructure at national and devolved levels
    - Technical oversight committee
    - National steering committee
    - Ministerial M&E committees
    - County M&E committees
    - National and county stakeholders' fora
    - Funds designated for M&E activities
    - Non-state actors (NGOs, civil society and private sector) to be supported by MED in their M&E capacity development
EXERCISES
Exercise 1: Identify 5 key indicators and complete an indicator matrix for a project/programme you are familiar with.
Indicator | Indicator Definition | Methods/Sources | Person/s Responsible | Frequency/Schedules | Data Analysis | Information Use
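One possible way to structure the rows of this matrix is shown in the Python sketch below; the example indicator, sources, responsibilities and schedule are invented for illustration and are not model answers.

from dataclasses import dataclass, asdict

@dataclass
class IndicatorMatrixRow:
    """One row of the indicator matrix, mirroring the columns in Exercise 1."""
    indicator: str
    indicator_definition: str
    methods_sources: str
    persons_responsible: str
    frequency_schedule: str
    data_analysis: str
    information_use: str

# A purely illustrative entry for a hypothetical water-supply project.
example = IndicatorMatrixRow(
    indicator="% of households using a protected water source",
    indicator_definition="Households whose main drinking-water source is a protected well or tap",
    methods_sources="Household survey; water-point records",
    persons_responsible="Project M&E officer",
    frequency_schedule="Baseline, mid-term and end-of-project surveys",
    data_analysis="Proportions compared against baseline and target values",
    information_use="Quarterly progress reports; decisions on siting new water points",
)

for column, value in asdict(example).items():
    print(f"{column}: {value}")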
Exercise 2: Identify a suitable project and complete a logical framework.
Exercise 3: Identify a suitable project and complete an Evaluation Grid using the five evaluation criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability.
Relevance | Effectiveness | Efficiency | Impact | Sustainability
Exercise 4: Identify a suitable project and complete an Evaluation Matrix using the five evaluation criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability.
Relevant evaluation criteria | Key questions | Specific sub-questions | Data sources | Data collection methods/tools | Indicators/Success standard | Methods for data analysis
Exercise 5: Identify 5 evaluation methods/techniques and complete an Evaluation Method/Technique Matrix in regard to a suitable project/programme.
Evaluation Method/Technique | What are they used for | What can it be used for | Advantages | Disadvantages | Resources required
Formal surveys | Used to collect standardized information from samples | Baseline data; comparing different groups; changes over time | Findings from sampled items can be applied to a wider target group | Data processing and analysis can be a bottleneck | Finances; technical and analytical skills
Rapid appraisal methods | | | | |
Participatory methods | | | | |
Exercise 6: Identify 5 evaluation models/approaches and complete an Evaluation Model/Approaches Matrix.
Evaluation Model/Approach | What are some examples or situations in which you would use this approach? | What conditions need to exist to use this approach? | What are some limitations of this approach?
Goal-free evaluation | | |
Kirkpatrick Four-Level approach | | |
Exercise 7: Evaluation Models
a) Applying the Kirkpatrick Four-Level Approach to Evaluate Training
A sales training covers basic topics, such as how to begin the sales discussion, how to ask the right questions, and how to ask for the sale. Although the trainer believes that the training will be successful, you have been requested to evaluate the training program. You decide to use the Kirkpatrick four-level approach.
What aspects of the training will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?
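A minimal Python sketch of how evidence might be organized across the four Kirkpatrick levels for this sales training; the questions, measures and figures are invented assumptions for illustration, not model answers.

# Hypothetical evaluation plan for the sales training, one entry per Kirkpatrick level.
kirkpatrick_plan = {
    "1. Reaction": {
        "question": "Did trainees find the sales training relevant and engaging?",
        "measure": "End-of-course satisfaction questionnaire (1-5 scale)",
        "illustrative_result": "Mean rating of 4.2",
    },
    "2. Learning": {
        "question": "Did trainees acquire the questioning and closing skills taught?",
        "measure": "Pre/post knowledge test and scored role-play",
        "illustrative_result": "+18 percentage points on average",
    },
    "3. Behaviour": {
        "question": "Are trainees applying the techniques in real sales calls?",
        "measure": "Supervisor observation checklist three months after training",
        "illustrative_result": "63% of observed calls used the recommended opening",
    },
    "4. Results": {
        "question": "Did sales performance improve as a result?",
        "measure": "Change in sales volume versus a comparison team",
        "illustrative_result": "+9% over two quarters",
    },
}

for level, details in kirkpatrick_plan.items():
    print(level)
    for key, value in details.items():
        print(f"  {key}: {value}")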
b) Applying the CIPP evaluation model (Context, Input, Process, Product)
What aspects of the project will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?
Exercise 8: Identify 5 evaluation designs and complete an Evaluation Design Matrix
Evaluation Design | When would you use this design? | What data collection methods might you use? | What are some limitations of this design?
Retrospective Pre-Test | | |
Case Study Design | | |