NCM 113 Midterm Lec Mod 6


MODULE 6 - MONITORING AND EVALUATING COMMUNITY HEALTH PROGRAMS IMPLEMENTED

1. Designing and Implementing an Evaluation Plan

Monitoring and evaluation (M&E) are the techniques used to find out how well our health programme is
achieving what it set out to do. We will originally have set objectives, i.e. the results we are aiming to
achieve, which may have been recorded. M&E enables us to see how effectively we have reached those
objectives. The techniques of M&E are one way to measure success, but other measures of success may
be just as important.

Although M&E are bracketed together and are often confused, each has a specific meaning. 

Monitoring - refers to the ongoing assessment of progress. It should be set up as part of our routine
programme management and is ideally done by programme staff and community members together. It
uses the record systems we have built into the programme.

Evaluation - refers to a systematic review of the programme's outcomes and impact, often at the end of a
cycle. It often involves an outside evaluation team.

One helpful way to distinguish between M and E is that monitoring asks the question ‘Are we doing
things right?’, and evaluation asks ‘Are we doing the right things?’. If monitoring is carried out well,
evaluation will be easier.

We often start with good ideas and ambitious objectives. As time goes on, these may get lost in day-to-
day activities or problems. M&E can highlight whether the programme is still on the right road, how far it
has travelled, and how far it still has to go. In this way M&E forms part of the planning cycle. Regular
monitoring will also identify problems early so they can be corrected, and improvements can be
suggested.

Evaluation helps everyone to see what they are doing and where they are going.

[Figure: The planning cycle, showing the roles of M&E.]


M&E helps the community to see how the programme is working, and shows the benefits it is bringing.
Community members will work with us in this process. We will also regularly feed M&E reports back to
the community as a means of promoting understanding of the whole process. Findings and results will
need to be presented in such a way that the community sees the benefits (and problems) and is motivated
to participate in improvements.

In practice, evaluations are often carried out because donors want confirmation that their money is being
well spent. But all stakeholders—programme, community, donors, and government—should benefit from
evaluation if it is well planned and carried out.

An evaluation showing good results can help our programme to become better known and a model for
other programmes. We can use Twitter, Facebook, and other forms of social media to make findings
known to wider audiences. If the evaluation uses a rigorous methodology, it can be published to share the
learning and raise the profile of the programme.

Governments may want to know what results the programme is achieving and whether it is reaching
district and national targets. If we are involved in specific programmes, e.g. End TB, Roll Back Malaria,
their co-ordinators will need our results. Civil society organizations involved in community-based health
care (CBHC) are often able to achieve more effective results at community level than government.
Evaluation (and the return of regular monitoring figures) should enable us to demonstrate this and
increase our credibility. In turn, this will enable CBHC as part of civil society to be entrusted with more
health tasks in national health programmes, which will be to everyone’s benefit.

EXAMPLE OF A PROGRAMME - BUILDING WELLS

Activities: What is actually done


 ● Building wells.
 ● Hygiene education.

Evaluation: An assessment, at a specific time, of a programme’s outcomes and impact


 ● How water use in the village has changed.
 ● How the wells have influenced household hygiene and sanitation.

Monitoring: Continuous process to record, reflect and use information regarding progress
 ● Use of resources, activities completed, progress towards programme objectives.

Indicators: Evidence or signs that change has taken place

Quantitative indicators are those that can be measured or counted


 ● Number of people using the wells.

Qualitative indicators are those gained by observation


 ● Local people’s views about the wells.

Goals: Long-term aims for impact

 ● To improve health in the target population.

Objectives: Results the programme is expected to achieve

 ● To increase the amount of clean water used in village households.

Inputs: Physical and human resources used within the programme


 ● Tools, bricks, labour.

Outputs: What is produced as a result of completed activities


 ● Functional village wells.

Impact: Long-term and sustainable change resulting from an activity


 ● Long-term improvements in the health of local people, social relationships in the village, and the
position of women.

Outcomes: The effect on the original situation due to the programme


 ● Increase in health through fewer households experiencing water- and hygiene-related illnesses.
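
To make the quantitative indicators and objectives above more concrete, the short Python sketch below shows one way monitoring records for the well-building example could be tallied against a coverage objective. It is an illustrative sketch only: the record fields, the monthly figures, and the 80% target are hypothetical and are not taken from this module.

from dataclasses import dataclass

@dataclass
class MonitoringRecord:
    month: str
    households_surveyed: int
    households_using_well: int   # quantitative indicator: households using the wells
    wells_functional: int        # output: functional village wells

def coverage(record: MonitoringRecord) -> float:
    """Percentage of surveyed households using the wells."""
    return 100 * record.households_using_well / record.households_surveyed

# Hypothetical monitoring records collected by programme and community members.
records = [
    MonitoringRecord("January", households_surveyed=120, households_using_well=54, wells_functional=2),
    MonitoringRecord("April", households_surveyed=118, households_using_well=83, wells_functional=3),
]

TARGET_COVERAGE = 80  # hypothetical objective: 80% of households using clean well water

for r in records:
    status = "on track" if coverage(r) >= TARGET_COVERAGE else "below target"
    print(f"{r.month}: {coverage(r):.0f}% of households using the wells ({status})")

The same pattern applies to any counted indicator: record what monitoring collects, then compare it with the stated objective at each review so problems are identified early.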

Types of Evaluation

There are several different types of evaluations depending on what is being evaluated and the purpose of
the evaluation. All types of evaluation can be generally classified into the broad categories of formative
and summative evaluations. 

1. Formative evaluations - are used primarily to provide information for initiative improvement by
examining the delivery of the initiative, its implementation, procedures, personnel, etc.

2. Summative evaluations - in contrast, examine the initiative's outcomes and are used to provide
information that will assist in making decisions regarding the initiative's adoption, continuation or
expansion, and can assist in judgments of the initiative's overall merit based on certain criteria.

3. Comprehensive evaluations - combine both process and outcome questions.

Evaluability Assessment

Before a proper evaluation can take place, it is often necessary to perform an evaluability assessment to
establish the extent to which the initiative is ready to be evaluated and what type of evaluation would be
most appropriate. The assessment aims to ascertain the needs, goals and objectives of the initiative and to
determine whether a formal evaluation is warranted at this point in time.
Common Barriers to Evaluation

It is common for people to have some reservations regarding evaluation of their programs or initiatives
since there are several things that can disrupt the evaluation process or impact on the results. Some of the
most commonly encountered barriers to an effective evaluation are:

 Lack of Management Support
If an evaluation is not initiated or supported by management or other senior officials, it will often not
receive the support and resources necessary to conduct it properly. Personnel associated with the
initiative are often expected to collect additional data and perform the analyses in addition to their
original duties. The result is often a poorly constructed and conducted evaluation with unreliable
information on which to base decisions.
 Lack of Skills and Resources
Lack of skills and resources in the collection, analysis, and interpretation of data fosters incomplete or
inaccurate evaluation results. Many organizations simply do not have the necessary time, systems or
personnel with the skills to construct interview or data collection instruments, or analyze and interpret
the data. This could result in false assumptions and conclusions from inaccurate, missing or irrelevant
data.
 Lack of Relevant Data
Lack of relevant data can render the evaluation useless. Unless data collection instruments and
methods are carefully planned from the beginning of the evaluation period, missing, inconsistent, and
untimely data will result in an incomplete evaluation. Certain evaluation questions will not be
answered and the inaccurate data can render the results meaningless.
 Fear of Consequences
Fear that the results of the evaluation will suggest or recommend elimination of or significant changes
to the initiative will often become a detriment to the process. More realistically, an evaluation, even
when negative, will usually lead to the improvement or refinement of the existing program.

Promoting Proper Evaluation

A culture of effective evaluation can be fostered by encouraging the participation of employees and
primary stakeholders in the development process, making the scope and purpose of the evaluation clear
from the start, and providing sufficient support from managers and supervisors. The creation of proper,
easy-to-use evaluation tools will also help the process go much more smoothly.
Steps of Program Evaluation

Program evaluation defined

It is the “systematic collection and analysis of information related to the design, implementation, and
outcomes of a program, for the purpose of monitoring and improving the quality and effectiveness of
the program.” (ACGME 2010a). As is clear in this definition, program evaluation is
about understanding the program through a routine, systematic, deliberate gathering of information to
uncover and/or identify what contributes to the “success” of the program and what actions need to be
taken in order to address the findings of the evaluation process (Durning & Hemmer 2010). In other
words, program evaluation tries to identify the sources of variation in program outcomes both from within
and outside the program, while determining whether these sources of variation or even the outcome itself
are desirable or undesirable. The model used to define the evaluation process shapes that work.

Information necessary for program evaluation is typically gathered through measurement processes.
Choices of specific measurement tools, strategies, or assessments for program evaluation processes are
guided by many factors, including the specific evaluation questions that define the desired understanding
of the program's success or shortcomings. Here, "assessments" are defined as measurements
(assessment = assay) or the strategies chosen to gather the information needed to make a judgment. In many
medical education programs data from trainee assessments are important to the program evaluation
process. There are, however, many more assessments (measurements) that may be necessary for the
evaluation process, and they may come from a variety of sources in addition to trainee performance
data. Evaluation, as noted earlier, is about reviewing, analyzing, and judging the importance or value of
the information gathered by all these assessments.

1. Planning

The relevant questions during evaluation planning and implementation involve determining the feasibility
of the evaluation, identifying stakeholders, and specifying short- and long-term goals. For example, does
the program have the clarity of objectives or transparency in its methods required for evaluation? What
criteria were used to determine the need for the program? Questions asked during evaluation planning
also should consider the program’s conceptual framework or underpinnings. For example, does a
proposed community-engaged research program draw on “best practices” of other programs, including
the characteristics of successful researcher-community partnerships? Is the program gathering
information to ensure that it works in the current community context?

Defining and identifying stakeholders is a significant component of the planning stage. Stakeholders are
people or organizations that have an interest in or could be affected by the program evaluation. They can
be people who are involved in program operations, people who are served or affected by the program, or
the primary users of the evaluation. The inclusion of stakeholders in an evaluation not only helps build
support for the evaluation but also increases its credibility, provides a participatory approach, and
supplies the multiple perspectives of participants and partners (Rossi et al., 2004).

Stakeholders might include community residents, businesses, community-based organizations, schools,
policy makers, legislators, politicians, educators, researchers, media, and the public. For example, in the
evaluation of a program to increase access to healthy food choices in and near schools, stakeholders could
include store merchants, school boards, zoning commissions, parents, and students.
Stakeholders constitute an important resource for identifying the questions a program evaluation should
consider, selecting the methodology to be used, identifying data sources, interpreting findings, and
implementing recommendations (CDC, 1999).

Once stakeholders are identified, a strategy must be created to engage them in all stages of the evaluation.
Ideally, this engagement takes place from the beginning of the project or program or, at least, the
beginning of the evaluation. The stakeholders should know that they are an important part of the
evaluation and will be consulted on an ongoing basis throughout its development and implementation.
The relationship between the stakeholders and the evaluators should involve two-way communication,
and stakeholders should be comfortable initiating ideas and suggestions. One strategy to engage
stakeholders in community programs and evaluations is to establish a community advisory board to
oversee programs and evaluation activities in the community. This structure can be established as a
resource to draw upon for multiple projects and activities that involve community engagement.

An important consideration when engaging stakeholders in an evaluation, beginning with its planning, is
the need to understand and embrace cultural diversity. Recognizing diversity can improve the evaluation
and ensure that important constructs and concepts are measured.

2. Implementation — Formative and Process Evaluation

Evaluation during a program’s implementation may examine whether the program is successfully
recruiting and retaining its intended participants, using training materials that meet standards for accuracy
and clarity, maintaining its projected timelines, coordinating efficiently with other ongoing programs and
activities, and meeting applicable legal standards. Evaluation during program implementation could be
used to inform mid-course corrections to program implementation (formative evaluation) or to shed light
on implementation processes (process evaluation).

For community-engaged initiatives, formative and process evaluation can include evaluation of the
process by which partnerships are created and maintained and ultimately succeed in functioning.

3. Completion — Summative, Outcome, and Impact Evaluation

Following completion of the program, evaluation may examine its immediate outcomes or long-term
impact or summarize its overall performance, including, for example, its efficiency and sustainability. A
program’s outcome can be defined as “the state of the target population or the social conditions that a
program is expected to have changed,” (Rossi et al., 2004, p. 204). For example, control of blood glucose
was an appropriate program outcome when the efficacy of empowerment-based education of diabetes
patients was evaluated (Anderson et al., 2009). In contrast, the number of people who received the
empowerment education or any program service would not be considered a program outcome unless
participation in and of itself represented a change in behavior or attitude (e.g., participating in a program
to treat substance abuse). Similarly, the number of elderly housebound people receiving meals would not
be considered a program outcome, but the nutritional benefits of the meals actually consumed for the
health of the elderly, as well as improvements in their perceived quality of life, would be appropriate
program outcomes (Rossi et al., 2004). Program evaluation also can determine the extent to which a
change in an outcome can be attributed to the program. If a partnership is being evaluated, the
contributions of that partnership to program outcomes may also be part of the evaluation.

Once the positive outcome of a program is confirmed, subsequent program evaluation may examine the
long-term impact the program hopes to have. For example, the outcome of a program designed to increase
the skills and retention of health care workers in a medically underserved area would not be represented
by the number of providers who participated in the training program, but it could be represented by the
proportion of health care workers who stay for one year. Reduction in maternal mortality might constitute
the long-term impact that such a program would hope to effect (Mullan, 2009).
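
As an illustration of the output/outcome distinction in the retention example above, the short Python sketch below computes the kind of proportion the text describes. The variable names and figures are hypothetical and are not data from Mullan (2009).

# Hypothetical figures illustrating output versus outcome.
workers_trained = 40             # output: providers who completed the training programme
still_in_post_after_1_year = 31  # hypothetical follow-up count in the underserved area

# The output alone (40 workers trained) does not show whether the programme
# changed the situation; the outcome is the proportion retained after one year.
retention_rate = still_in_post_after_1_year / workers_trained
print(f"Output:  {workers_trained} health care workers trained")
print(f"Outcome: {retention_rate:.0%} still in post after one year")
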
4. Dissemination and Reporting

To ensure that the dissemination and reporting of results to all appropriate audiences is accomplished in a
comprehensive and systematic manner, one needs to develop a dissemination plan during the planning
stage of the evaluation. This plan should include guidelines on who will present results, which audiences
will receive the results, and who will be included as a coauthor on manuscripts and presentations.

Dissemination of the results of the evaluation requires adequate resources, such as people, time, and
money. Finding time to write papers and make presentations may be difficult for community members
who have other commitments (Parker et al., 2005). In addition, academics may not be rewarded for
nonscientific presentations and may thus be hesitant to spend time on such activities. Additional resources
may be needed for the translation of materials to ensure that they are culturally appropriate.

Although the content and format of reporting may vary depending on the audience, the emphasis should
be on full disclosure and a balanced assessment so that results can be used to strengthen the program.
Dissemination of results may also be used for building capacity among stakeholders.

DOCUMENTATION AND REPORTING


1. Family Health Records
2. Community Profile

WORKING WITH GROUPS TOWARD COMMUNITY DEVELOPMENT


A. Stages of Group Development

1. Orientation Stage
 Tasks confronting group members during the initial stage:
 Determine a way of achieving the purpose for which they joined the group
 Find a place for themselves in the group
  The main concern of the members is whether they are “in” or “out”, liked and respected or
ignored and rejected in the group
 Members are careful with their choice of words because they are not sure how they will
be taken by others in the group
 Search for similarities
 Giving and seeking advice
 Grouping, testing, reluctant group

2. Conflict Stage
 Characterized by the group’s concern over dominance, control and power
 Each member tries to establish for himself his preferred amount of initiative and power,
and gradually a control hierarchy within the group is established
 Members become judgmental.
 Resentment toward the leader stems from the gradual recognition by each member
that he will not be the leader's favorite "group member"
3. Cohesiveness Stage
 The group gradually develops into a cohesive unit.
 There is an increase in morale and trust as members feel a sense of belongingness to the group.
 The chief concern of the group is with intimacy and closeness.
 The group suppresses all expressions of negative feelings in order to maintain a harmonious
environment.

4. Work Group Stage


 The uniqueness of the members and the leaders is seen and expected
 Members accept others' differences without associating "good" and "bad" with the
differences.
 Consensus is reached from a rational discussion rather than a compulsive attempt at
unanimity.
 From a sense of group identity comes a sense of individual identity

5. Termination Stage
 Members of groups experience a sense of ending.
  The key emotions are joy and sadness
  There is emotional coping with the loss of valued personal relationships
 Tasks of the group at this stage:
1. Finishing the agenda
2. Establishing key decisions and completing the group products
3. Tying up loose ends and writing off unfinished business

INTERVENTIONS TO FACILITATE GROUP GROWTH


1. Provide necessary orientation, structure and direction
 Preliminary introductions should be made for the group members who do not know each
other yet.
a. Initiated by the leader or facilitator
b. Introduction of one member by a co-member
 Interview of co-members
 Elucidate the group’s goals and purposes
 Members open up when they feel secure in expressing their attitudes and ideas
 Leader encourages productive participation

a. Observing for signs of a member's efforts to be heard and giving them an opportunity to
contribute.
b. Encouraging and supporting members to participate
c. Summarizing and clarifying contributions
d. Not monopolizing the discussion or commenting too frequently
e. Being sensitive in identifying those too eager to talk, as they take up all the group's
time.
2. Process, negotiate and resolve conflicts to everyone's satisfaction
 Help members understand the nature of conflicts
 It is a natural part of any relationship and of any group
 It is desirable and extremely valuable for several reasons:
a. It encourages inquiry, promotes objectivity and sharpens analysis
b. It stimulates interest and curiosity, and the motivation and energy of group members
c. It reduces the natural tension and frustration of working together
d. Members derive personal benefits from the experiences of conflict
e. It brings information to members about where they are, what is important to them,
how effective the group work can be and how their relationships can be improved
f. Arguments are kept up-to-date when conflicts are handled as they are experienced
by members in the here-and-now.
 Any conflict can be handled constructively by the group:
a. Members should develop specific group norms or rules to create an atmosphere
conducive to healthy management of conflicts.
b. Conflicts are managed effectively by encouraging group cooperation and by
using the problem-solving approach
 Help members go through the coping process of conflict resolution using the problem-
solving approach
 Clarify the basic issues
 Explore ways to settle the conflict:

a. Reduce the forces that bring about the conflict or increase the forces that minimize the conflict
b. Integrate the ideas that are similar
c. Help the members analyze if the opposing views can be taken by the group as its
decision
 Implement an agreement satisfactory to the group
 Evaluate the success of the actions taken
 Help members generate new ways of looking at the situation or problem
 Minimize the conflicts by separating ideas, beliefs and concepts from the person advocating
them.
 Role reversal
 Structured exercise (Breaking balloons exercise and connect dots)
 Help members analyze the here-and-now experience
 Keep an eye on how messages are sent and received, not only on the content of the group
discussion. (Metacommunication – a message about the nature of the relationship between the
interacting persons)
 Be an active participant in the here-and-now experience
 Aggressor – deflates the status of others by expressing disapproval of their values, acts or feelings,
by attacking the group or the problem it is working on, or by joking aggressively
 Blocker – tends to be negative and stubbornly resistant
 Recognition seeker – calls attention to self through boasting, reporting on personal achievements,
acting in unusual ways, or struggling to prevent being placed in an inferior position
 Self-confessor – uses the group as an audience for the expression of personal, non-group-oriented
feelings, insights, and ideology
 Playboy – displays lack of involvement in the group through horseplay and other more-or-less studied
forms of irrelevant behavior
 Dominator – tries to assert authority by engaging in flattery, claiming superior status or the right
to attention, giving directions authoritatively, and interrupting the contributions of others
 Help seeker – attempts to evoke a sympathetic response from other group members or from the whole
group through expressions of insecurity, personal confusion, or self-depreciation beyond
reason
 Special interest pleader – speaks for some underdog, usually cloaking own prejudices or biases
in the stereotype that best fits his or her individual need

3. Be aware of the effects of one's own behavior in the group: use the self for group growth
4. Act as the group's completer/resource person
5. Derive opportunities to apply learning in other situations
