Learn the four main steps to developing an evaluation plan, from clarifying objectives and goals to setting up a timeline for evaluation activities.
After many late nights of hard work, more planning meetings than you care to remember, and many pots of coffee, your initiative has finally gotten off the ground. Congratulations! You have every reason to be proud of yourself, and you should probably take a bit of a breather to avoid burnout. Don't rest on your laurels too long, though--your next step is to monitor the initiative's progress. If your initiative is working perfectly in every way, you deserve the satisfaction of knowing that. If adjustments need to be made to guarantee your success, you want to know about them so you can jump right in and keep your hard work from going to waste. And, in the worst-case scenario, you'll want to know if it's an utter failure so you can figure out the best way to cut your losses. For these reasons, evaluation is extremely important.
There's so much information on evaluation out there that it's easy for community groups to fall into the trap of just buying an evaluation handbook and following it to the letter. This might seem like the best way to go about it at first glance--evaluation is a huge topic, and it can be pretty intimidating. Unfortunately, if you resort to the "cookbook" approach to evaluation, you may find yourself collecting a lot of data, analyzing it, and then simply filing it away, never to be seen or used again.
Instead, take a little time to think about what exactly you want to know about the initiative. Your evaluation system should address simple questions that are important to your community, your staff, and (last but never least!) your funding partners. Try to keep financial and practical considerations in mind when asking yourself what sort of questions you want answered. The best way to ensure that you have the most productive evaluation possible is to come up with an evaluation plan.
When should you develop an evaluation plan?
As soon as possible! The best time to do this is before you implement the initiative. After that, you can do it anytime, but the earlier you develop it and begin to implement it, the better off your initiative will be, and the stronger your outcomes will be at the end.
Remember, evaluation is more than just finding out if you did your job. It is important to use evaluation data to improve the initiative along the way.
We'd all like to think that everyone is as interested in our initiative or project as we are, but unfortunately that isn't the case. For community health groups, there are basically three groups of people who might be identified as stakeholders (those who are interested, involved, and invested in the project or initiative in some way): community groups, grantmakers/funders, and university-based researchers. Take some time to make a list of your project or initiative's stakeholders, as well as which category they fall into.
Each type of stakeholder will have a different perspective on your organization as well as what they want to learn from the evaluation. Every group is unique, and you may find that there are other sorts of stakeholders to consider with your own organization. Take some time to brainstorm about who your stakeholders are before you begin making your evaluation plan.
While some information from the evaluation will be of use to all three groups of stakeholders, some will be needed by only one or two of the groups. Grantmakers and funders, for example, will usually want to know how many people were reached and served by the initiative, as well as whether the initiative had the community-level impact it intended to have. Community groups may want to use evaluation results to guide decisions about their programs and where they put their efforts. University-based researchers will most likely be interested in determining whether any improvements in community health were actually caused by your programs or initiatives; they may also want to study the overall structure of your group or initiative to identify the conditions under which success can be reached.
You and your stakeholders will probably be making decisions that affect your program or initiative based on the results of your evaluation, so you need to consider what those decisions will be. Your evaluation should yield honest and accurate information for you and your stakeholders; you'll need to be careful not to structure it in such a way that it exaggerates your success, and you'll need to be really careful not to structure it in such a way that it downplays your success!
Consider what sort of decisions you and your stakeholders will be making. Community groups will probably want to use the evaluation results to help them find ways to modify and improve the program or initiative. Grantmakers and funders will most likely be making decisions about how much funding to give you in the future, or even whether to continue funding your program at all (or any related programs). They may also consider whether to attach any requirements to that funding (e.g., a grantmaker tells you that your program's funding may be decreased unless you show an increase in services in a given area). University-based researchers will need to decide how they can best assist with plan development and data reporting.
You'll also want to consider how you and your stakeholders plan to balance costs and benefits. Evaluation should take up about 10-15% of your total budget; for example, an initiative with a $100,000 budget would set aside roughly $10,000-$15,000 for evaluation activities. That may sound like a lot, but remember that evaluation is an essential tool for improving your initiative. When considering how to balance costs and benefits, ask yourself the following questions:
The first step is to clarify the objectives and goals of your initiative. What are the main things you want to accomplish, and how have you set out to accomplish them? Clarifying these will help you identify which major program components should be evaluated. One way to do this is to make a table of program components and elements.
For our purposes, there are four main categories of evaluation questions. Let's look at some examples of possible questions and suggested methods to answer those questions. Later on, we'll tell you a bit more about what these methods are and how they work.
Once you've come up with the questions you want to answer in your evaluation, the next step is to decide which methods will best address those questions. Here is a brief overview of some common evaluation methods and what they work best for.
Monitoring and feedback system
This method of evaluation has three main elements:
Member surveys about the initiative
When Ed Koch was mayor of New York City, his trademark call of "How am I doing?" was known all over the country. It might seem like an overly simple approach, but sometimes the best thing you can do to find out if you're doing a good job is to ask your members. This is best done through member surveys. There are three kinds of member surveys you're most likely to need to use at some point:
Goal attainment report
If you want to know whether your proposed community changes were truly accomplished--and we assume you do--your best bet may be to do a goal attainment report. Have your staff keep track of the date each time a community change mentioned in your action plan takes place. Later on, someone compiles this information (e.g., "Of our five goals, three were accomplished by the end of 1997.").
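If your staff log each community change with its date and the goal it serves, compiling the report itself can be a short script. Here's a minimal sketch in Python; the goal names, dates, and the change_log structure are all hypothetical stand-ins for whatever your action plan and records actually contain.

```python
from datetime import date

# Hypothetical log of community changes: (goal from the action plan, date accomplished).
# In practice, staff would add an entry each time a change takes place.
change_log = [
    ("Pass a local open-container ordinance", date(1997, 3, 14)),
    ("Start a designated-driver program", date(1997, 8, 2)),
    ("Add weekend sobriety checkpoints", date(1997, 11, 20)),
]

# The five goals named in the action plan (hypothetical examples).
action_plan_goals = {
    "Pass a local open-container ordinance",
    "Start a designated-driver program",
    "Add weekend sobriety checkpoints",
    "Open a teen drop-in center",
    "Expand server training at local bars",
}

cutoff = date(1997, 12, 31)
accomplished = {goal for goal, when in change_log
                if goal in action_plan_goals and when <= cutoff}

print(f"Of our {len(action_plan_goals)} goals, {len(accomplished)} "
      f"were accomplished by the end of {cutoff.year}.")
# Prints: Of our 5 goals, 3 were accomplished by the end of 1997.
```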
Behavioral surveys
Behavioral surveys help you find out what sort of risk behaviors people are taking part in and the extent to which they're doing so. For example, if your coalition is working on an initiative to reduce car accidents in your area, one risk behavior to survey would be drunk driving.
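To make that concrete, suppose each survey response records whether the respondent reports having driven after drinking in the past 30 days. A sketch like the following (with made-up responses; your survey instrument would define the real question and sample) turns raw answers into the prevalence figure you'd track over time:

```python
# Hypothetical responses: True means the respondent reported driving
# after drinking in the past 30 days.
responses = [True, False, False, True, False, False, False, True, False, False]

prevalence = sum(responses) / len(responses)
print(f"{prevalence:.0%} of respondents reported drunk driving in the "
      f"past 30 days ({sum(responses)} of {len(responses)}).")
# Prints: 30% of respondents reported drunk driving in the past 30 days (3 of 10).
```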
Interviews with key participants
Key participants - leaders in your community, people on your staff, etc. - have insights that you can really make use of. Interviewing them to get their viewpoints on critical points in the history of your initiative can help you learn more about the quality of your initiative, identify factors that affected the success or failure of certain events, provide you with a history of your initiative, and give you insight which you can use in planning and renewal efforts.
Community-level indicators of impact
These are tried-and-true markers that help you assess the ultimate outcome of your initiative. For substance use coalitions, for example, the U.S. Center for Substance Abuse Prevention (CSAP) and the Regional Drug Initiative in Oregon recommend several proven indicators (e.g., single-vehicle nighttime car crashes, alcohol-related emergency transports) that help coalitions figure out the extent of substance use in their communities. Studying community-level indicators helps you provide solid evidence of the effectiveness of your initiative and determine how successful key components have been.
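Tracked year by year, an indicator becomes a simple time series you can summarize for stakeholders. This sketch uses entirely made-up counts to show the kind of baseline-to-latest comparison an evaluation team might report:

```python
# Hypothetical annual counts of single-vehicle nighttime crashes.
crashes_by_year = {1994: 61, 1995: 58, 1996: 49, 1997: 44}

years = sorted(crashes_by_year)
baseline, latest = crashes_by_year[years[0]], crashes_by_year[years[-1]]
change = (latest - baseline) / baseline

print(f"Single-vehicle nighttime crashes: {baseline} in {years[0]}, "
      f"{latest} in {years[-1]} ({change:+.0%} since baseline).")
# Prints: Single-vehicle nighttime crashes: 61 in 1994, 44 in 1997 (-28% since baseline).
```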
When does evaluation need to begin?
Right now! Or at least at the beginning of the initiative! Evaluation isn't something you should wait to think about until after everything else has been done. To get an accurate, clear picture of what your group has been doing and how well you've been doing it, it's important to start paying attention to evaluation from the very start. If you're already part of the way into your initiative, however, don't scrap the idea of evaluation altogether--even if you start late, you can still gather information that could prove very useful to you in improving your initiative.
Outline questions for each stage of development of the initiative
We suggest completing a table listing:
With this table, you can get a good overview of what sort of things you'll have to do in order to get the information you need.
When do feedback and reports need to be provided?
Whenever you feel it's appropriate. Of course, you will provide feedback and reports at the end of the evaluation, but you should also provide periodic feedback and reports throughout the duration of the project or initiative. In particular, since you should provide feedback and reports at meetings of your steering committee or overall coalition, find out ahead of time how often they'd like updates. Funding partners will want to know how the evaluation is going as well.
When should evaluation end?
Shortly after the end of the project - usually when the final report is due. Don't wait too long after the project has been completed to finish up your evaluation - it's best to do this while everything is still fresh in your mind and you can still get access to any information you might need.
You'll probably also include specific tools (i.e., brief reports summarizing data), annual reports, quarterly or monthly reports from the monitoring system, and anything else that is mutually agreed upon between the organization and the evaluation team.
Now that you've decided you're going to do an evaluation and have begun working on your plan, you've probably also had some questions about how to ensure that the evaluation will be as fair, accurate, and effective as possible. After all, evaluation is a big task, so you want to get it right. What standards should you use to make sure you do the best possible evaluation? In 1994, the Joint Committee on Standards for Educational Evaluation issued a list of program evaluation standards that are widely used to regulate evaluations of educational and public health programs. The standards the committee outlined are for utility, feasibility, propriety, and accuracy. Consider using evaluation standards to make sure you do the best evaluation possible for your initiative.
Contributor
Chris Hampton

Online Resources
The Action Catalogue is an online decision support tool intended to enable researchers, policy-makers, and others who want to conduct inclusive research to find the method best suited to their specific project needs.
CDC Evaluation Resources provides an extensive list of resources for evaluation, as well as links to key professional associations and key journals.
Developing an Evaluation Plan offers a sample evaluation plan provided by the U.S. Department of Housing and Urban Development.
Developing an Effective Evaluation Plan is a workbook provided by the CDC. In addition to ample information on designing an evaluation plan, this book also provides worksheets as a step-by-step guide.
Evaluating Your Community-Based Program is a handbook designed by the American Academy of Pediatrics and includes extensive material on a variety of topics related to evaluation.
GAO Designing Evaluations is a handbook provided by the U.S. Government Accountability Office. It contains information about evaluation designs, approaches, and standards.
The Magenta Book - Guidance for Evaluation provides an in-depth look at evaluation. Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible. Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.
Plan an Evaluation is an extensive guide provided by MEERA aimed at providing detailed information on planning an evaluation.
Using Data as an Equity Tool is an Urban Institute resource which provides strategies and key practices which place-based organizations can use to build local data capacity with their partners, improve service provision and day-to-day operations, and amplify community voices.
Print Resources
Argyris, C., Putnam, R., & Smith, D. (1990). Action science (Chapter 2, pp. 36-79). San Francisco, CA: Jossey-Bass.
Fawcett, S., in collaboration with Francisco, V., Paine-Andrews, A., Lewis, R., Richter, K., Harris, K., Williams, E., Berkley, J., Schultz, J., Fisher, J., & Lopez, C. (1993). Work group evaluation handbook: Evaluating and supporting community initiatives for health and development. Lawrence, KS: Work Group on Health Promotion and Community Development, The University of Kansas.
Fawcett, S., Sterling, T., Paine, A., Harris, K., Francisco, V., Richter, K., Lewis, R., & Schmid, T. (1995). Evaluating community efforts to prevent cardiovascular diseases. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion.
Francisco, V., Fawcett, S., & Paine, A. (1993). A method for monitoring and evaluating community coalitions. Health Education Research: Theory and Practice, 8(3), 403-416.
Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3-46). Thousand Oaks, CA: Sage.
Green, L., & Kreuter, M. (1991). Evaluation and the accountable practitioner. In Health promotion planning (2nd ed., pp. 215-260). Mountain View, CA: Mayfield Publishing Company.
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards. Evaluation Practice, 15, 334-336.