APPENDIX 3:
Evaluation Guide for Fall Prevention Programs
Falls Prevention Program Evaluation
I. What is Evaluation?
Evaluation is the process of determining whether programs or specific
aspects of programs are effective, efficient, appropriate or meaningful,
and if not, how to make them so. Additionally, evaluation will produce
the information to show if a program has unexpected benefits or problems.
Evaluation should be considered a critical and integral component of
any program 1 that is included from the planning stages onward. It can
include a wide variety of methods to evaluate many aspects of the program.
There are numerous books, manuals, web pages and other materials that
provide in-depth information on conducting evaluations.
However, one does not have to be an expert to carry out a useful evaluation.
Programs that demonstrate through evaluation a high probability of success
have a much greater chance of receiving community, regional, financial
and legislative support.
II. Purpose of Evaluation
Many people believe that evaluation is about proving the success or failure
of a program. This is not necessarily the case. Evaluation enables you
to provide the best possible programming in your community. It helps you
learn from mistakes, modify steps of your program as you progress and
ultimately determine if you have reached your final goal. Without evaluation
you cannot tell if the program is actually helping the people you are
targeting. Evaluation provides opportunities for those involved, both
participants and programmers, to have important input in the programming
process. Participants' experiences are valuable sources of information.
By asking them what worked and what did not, and what they would
recommend for future programming, future pitfalls can be avoided and valuable
insights gained. It also sends participants an important message: their
opinions and experiences count! Similarly, involving programming staff
in the evaluation process provides an opportunity to let them know that
their work is making a difference!
III. Why Evaluate Fall and Injury Prevention Programs?
Many see evaluation as something that takes scarce dollars and resources
away from service delivery, with no real benefit to the program.
However, properly conducted evaluations can serve many useful functions
that ultimately can greatly improve a program. Some of the potential benefits
of evaluation are:
- accounting for what has been accomplished through project funding;
- promoting learning about which health promotion strategies work in
communities and which do not;
- contributing to the body of knowledge about injury prevention;
- learning whether proposed program materials are suitable for your
intended audience;
- learning whether program plans are feasible before they are put into
effect;
- learning whether programs are producing the desired results;
- learning whether programs have unexpected benefits or problems;
- enabling planners and providers to improve and build on existing program
services;
- producing data that will inform future programs and policy;
- learning whether specific aspects of the program need to be changed
while others should remain;
- demonstrating effectiveness to the community, to current or potential
funders, or to those who want to conduct similar programs; and,
- contributing to policy development.
1 Because of the terminology typically used around evaluation, this
section refers to program evaluation. However, many of the same issues
and principles also apply to policy evaluation.
IV. Components of an Evaluation
A. Basic Steps:
Every program evaluation should include the following:
1. A clear and specific statement defining the objectives of the evaluation.
This includes measurable project goals that outline what the project plans
to accomplish. Although this may seem self-evident, many evaluations have
gone off-track because this initial work has not been done. The more focused
you are about what you want to examine, the more efficient you can be
in your evaluation, the shorter the time it will take you and ultimately
the less it will cost.
2. A defined target population and a comparison group (this could be
one group with a pre- and post-test used to determine a difference). Be
as specific as possible.
3. A written outline of the type of information to be collected and how
that information relates to your program's objectives. This might include:
what information the project needs to collect, who has the information
and how the information will be collected.
4. A method for the collection of information that is suitable for the
objectives of the evaluation and that will produce the type of information
you are looking for.
5. A plan for designing and testing the instruments you will use to collect
the information (i.e., are the instruments written in appropriate language
for your participants, feasible to administer in the time available, and
do they capture the required information?).
6. Collection of information from the members of the target population.
7. Recording of the information in a form that makes it possible to analyze.
8. Analysis of the recorded information.
9. Written evaluation report describing the evaluation's results.
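Steps 2, 7 and 8 above — comparing a group against itself with a pre- and post-test, recording the information in an analyzable form, and analyzing it — can be as simple as a paired before/after comparison. The sketch below is a minimal illustration only; the questionnaire scores are hypothetical, not from any real program:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical fall-risk questionnaire scores (lower = better),
# collected from the same participants before and after the program.
pre  = [14, 12, 15, 10, 13, 16, 11, 14]
post = [11, 10, 13, 10, 11, 12, 10, 12]

# Step 7: record the information in a form that can be analyzed
# (here, the paired difference for each participant).
diffs = [b - a for a, b in zip(pre, post)]

# Step 8: analyze — the mean change and a simple paired t statistic.
mean_change = mean(diffs)
t_stat = mean_change / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean change: {mean_change:.2f}")
print(f"paired t statistic: {t_stat:.2f}")
```

A negative mean change here indicates improvement (lower risk scores after the program); a real evaluation would also report the sample size and interpret the t statistic against an appropriate threshold.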
Planning Your Program Evaluation
Ideally, evaluation is an integral part of the programming cycle that
begins as soon as the idea for a fall or injury prevention program is
conceived, interweaves with program activities throughout the life of
the program and either ends after the program is finished or continues
on to determine if the program has effects over a sustained period of
time.
A brief overview, organized around five key evaluation questions, can
help you prepare the evaluation, introduce the evaluation process to
colleagues, and guide your evaluation report:
- Did we do what we said we would do?
- What did we learn about what worked and what did not?
- What difference did it make that we did this work?
- What could we do differently?
- How do we plan to use evaluation findings for continuous learning?
The cost will depend on the type of evaluation required; however, a good
rule of thumb is to budget 10% of your total program costs for evaluation.
Further, you may want to consider bringing in or hiring a program evaluation
consultant to help develop, implement, and summarize your evaluation.
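As a worked illustration of the 10% rule of thumb, the figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical total program budget, in dollars (illustration only).
program_costs = 50_000

# The 10% rule of thumb for evaluation from the text.
EVALUATION_SHARE = 0.10

evaluation_budget = program_costs * EVALUATION_SHARE
print(f"evaluation budget: ${evaluation_budget:,.0f}")
```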
Types of Evaluation
There are many different evaluation approaches: needs assessments, accreditation,
cost/benefit analysis, effectiveness, efficiency, formative, summative,
goal based, process, outcome, etc. However, there are a few basic types
of information that are most common:
- Formative Evaluation - This helps to answer: Is the program
we are proposing likely to be effective among our intended target audience?
- When to use: during the development of a new program or when an existing
program is being modified; has existing problems with no obvious solutions
or is being used with a new population.
- What it shows: whether proposed strategies are likely to reach, be
understood by, and accepted by your target audience.
- Why is it useful: maximizes the likelihood that the program will be successful
and allows the programmer to make revisions before the full effort is underway.
- Process-Based Evaluation - This helps to answer: How well is
the program really working and what are its strengths and weaknesses?
- When to use: as soon as program begins.
- What it shows: how well a program is working or going according to
initial plan.
- Why is it useful: identifies problems early and helps evaluate how
well strategies and materials are working.
- Impact Evaluation - This helps to answer: To what degree
did the program meet its (intermediate) goals?
- When to use: after the program has run through a cycle with contact
from members of target audience.
- What it shows: the degree to which program met its intermediate goals
(i.e., how many people changed their behaviour or environment?).
- Why is it useful: provides useful planning data to programmers for
future planning and funding purposes.
- Outcome-Based Evaluation - This helps to answer: What benefits
should participants of my program expect?
- When to use: when program is complete.
- What it shows: the degree to which the program has an effect on the
health outcomes of participants.
- Why is it useful: provides evidence of success for future planning,
funding, and health
promotion.
Action Plan Checklist for Program Planning and Evaluation
The following checklist can be used for program planning and evaluation:
- First investigate to determine if an effective program similar to
your idea already exists either in your community or somewhere else.
A good source of information may be the companion document to this report,
entitled: A National Inventory of Canadian Falls Prevention Programs
(2001).
- If a similar program does exist, talk with the program coordinator
and read the evaluation report. Tailor the program as necessary to meet
your needs.
- Decide where you will seek financial support:
  - find out which community, regional, provincial or federal agencies
    provide grants for the type of program you are planning; and,
  - service clubs (e.g., Rotary, Lions) or businesses (banks or credit
    unions) will often support your program.
- Decide where you will seek non-financial support:
  - find out which regional or provincial agencies provide technical assistance
    for the type of program you are planning (e.g., Regional Health Authority,
    BC Injury Research and Prevention Unit, Adult Injury Management Office
    at the University of Victoria Centre on Aging, Office for Injury Prevention,
    BC Ministry of Health); and,
  - find out which community and business groups will support your program
    and provide some sort of support (e.g., local Fire Department, Public
    Health Unit).
- Develop an outline of a plan for your fall and/or injury prevention
program. Remember to plan your evaluation at this stage. Evaluate the
outline. Talk to a small number of people you will try to reach with
your fall and/or injury prevention program. Consult people who have
experience with programs similar to the one you envision. Ask them to
review your plan and modify based on their feedback.
- Develop a plan to enlist financial, technical and other support you
need from the agencies, businesses or community groups you have identified.
Use your outline of the program plan to demonstrate your planning, commitment
and expertise.
- Put your plan for enlisting support into action. Keep track of contacts
that you make, their feedback and commitments of support.
- If unexpected problems arise while seeking support, re-evaluate. Determine
why support is not forthcoming and ask for recommendations or modifications
to ensure funding or recommendations of other more appropriate organizations
to target.
- Develop and test your instruments, procedures and materials. Ask a
small number of people from your target group as well as any organizations
that have committed to providing technical support to review your materials
and modify based on feedback.
- Begin program implementation.
- Keep track of program-related contacts, participants, supporters and
critics. Track all items either distributed to or collected from participants
throughout the life of the program.
- If unexpected problems arise while the program is in operation, re-evaluate
to find the cause and solution.
- Use the data you have collected to evaluate how well the program met
its goals.
- Use the results of this evaluation to justify continued funding and
support for your program.
- Share the results of your program and evaluation with others through
newsletters, teleconferences, provincial conferences and other publications.
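The record-keeping steps in the checklist — tracking contacts, participants, supporters, critics, and items distributed or collected — can be sketched as a minimal tracking log. The field names and entries below are illustrative assumptions, not a prescribed schema:

```python
import csv
from dataclasses import dataclass, asdict, fields

# One program-related contact or distributed item (illustrative fields).
@dataclass
class ContactRecord:
    date: str   # ISO date, e.g. "2001-05-14"
    name: str   # person or organization contacted
    role: str   # participant, supporter, critic, funder, ...
    item: str   # material distributed or collected, if any
    notes: str  # feedback or commitments of support

# Hypothetical entries for illustration only.
records = [
    ContactRecord("2001-05-14", "Lions Club", "supporter", "", "committed $500"),
    ContactRecord("2001-05-21", "J. Smith", "participant", "home-safety checklist", ""),
]

# Write the log to a CSV file so it can be analyzed at evaluation time.
with open("contact_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ContactRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```

Keeping the log in a simple, consistent format from day one means the data needed to justify continued funding is already in hand when the evaluation report is written.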
Other Things to Consider
In addition to the areas covered above, there are other things to
consider when developing a new program aimed at reducing falls or
fall-related injuries. Three important considerations are program
acceptability, feasibility, and sustainability.
- Acceptability
Reducing falls and fall-related injuries typically requires changing
people's behaviour or environment. Changing behaviour or modifying one's
living area is often difficult for anyone, and seniors are likely to
be as resistant to change as anyone else. One of the barriers in working
with seniors is that they must first accept that something needs to
change, and many seniors regard the changes associated with reducing
falls as a sign that they are getting old and frail. Such a stigma creates
resistance to change. Change often requires engaging in new behaviours,
such as exercise, for which there are adherence issues, or allowing
intrusive measures such as home modifications. These barriers
to adopting risk factor reduction strategies are further complicated
by seniors seeing these changes as evidence of the aging process. These
barriers point to the great need to consider how to make the program
strategies for change as acceptable as possible to seniors. The type
of intervention, the degree of involvement of the target audience in
forming and implementing the program, and the manner in which information
and the program is presented to seniors can all impact the target population's
ultimate acceptance of the program.
- Feasibility
When planning a new program, one of the considerations should be whether
the program could be successfully implemented. Issues around feasibility
have been discussed throughout this document, particularly concerning
the cost around reducing risk factors and having the necessary human
resources, knowledge, and skills to implement the program. Other factors
that may be taken into consideration around feasibility include ensuring
the correct infrastructure is in place for delivering and managing the
program and ensuring that all the elements are in place to be able to
address the selected risk factors. Feasibility is also an important
component of implementing new policy, as considerations must be made
to ensure
that organizations and individuals have the capacity and resources to
be able to implement and follow the policy. Otherwise, the policy will
likely be ineffective, no matter how well intentioned it may be.
- Sustainability
Sustainability refers to the ability of a program to continue after
its initial start-up. Many of the issues concerning feasibility also
apply to sustainability, but there are also other considerations. After
the initial enthusiasm of developing and implementing a new program
has worn off, it is necessary to ensure that the proper management and
infrastructure exist to continue the program. Further, funding to start
a program is often of limited duration, and there is a need to secure
reliable, sustainable funding into the future. Program evaluation is
particularly important in obtaining additional funding, as programs
that can demonstrate that they are rated highly by the target audience
and have a positive impact will likely receive greater interest, consideration,
and support than programs that cannot provide such evidence.
V. Evaluation References
Minister of Health & Welfare Canada. (1996). Guide to project
evaluation: a participatory approach. Ottawa, Ontario.
McNamara, C. (2001). Basic Guide to Program Evaluation.
http://www.mapnp.org/library/evaluatn/fnl_eval.htm
Thompson, N. & McClintock, H. (1998). Demonstrating Your
Program's Worth. Report for the National Center for Injury Prevention
and Control. Atlanta, Georgia.