On the first day of evaluation training, I generally ask afterschool program staff, "What is the first thing you think of when I say evaluation?" They respond with such words as "judging . . . punitive . . . test . . . mandatory . . . painful . . . confusing."
I then ask participants how much experience they’ve had with evaluation. Generally they’ve had a lot. For years they’ve collected accountability information to prove they’re using grant money as specified in proposals and hosted site visits from funders’ program officers; they’ve filled out workshop evaluations and had their own performance evaluated.
No one ever asks me, "What is evaluation?" The definition of program evaluation that my colleague Anita Baker and I use in our evaluation workshops was adapted from Utilization-Focused Evaluation (1997) by Michael Quinn Patton, the former president of the American Evaluation Association.
Specific words in this definition have specific applications in afterschool program evaluation.
Problem. Almost every afterschool program I have visited collects an enormous amount of data. However, the programs revise their forms every year, so they can’t compare data over time.
Solution. Get clear about the types of data you want to collect for each child or family. Developing a logic model or theory of change is a good starting point; see Theory of Change or Innovation Network. In a systematic approach, data collection is integrated into program activities and never becomes an end-of-year burden. Develop a system and an evaluation form, and then stick with them for several years.
Problem. Most afterschool programs don’t fully use the data they collect. For instance, staff may review report cards in order to understand each child’s challenges and successes. Then they stash the cards in student files, without comparing data across students.
Solution. Go through student files to see what data you already collect. Develop a spreadsheet or table with a row for each student. The columns are the different types of data: grades, attendance, projects created, and so on. Make sure all records are dated so that you can track progress over time. The Bruner Foundation offers resources on analyzing evaluation data.
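To make the suggestion above concrete, here is a minimal sketch of such a tracking table in Python. The student initials, data fields, and values are invented for illustration; a real program would pull them from its own records or a spreadsheet export.

```python
# Sketch: one row of records per student, with every record dated
# so progress can be tracked over time, as recommended above.
# All names and values below are hypothetical.
from datetime import date

records = [
    {"student": "A.", "date": date(2024, 1, 15), "grade": "B", "attendance": 0.90},
    {"student": "A.", "date": date(2024, 6, 15), "grade": "A", "attendance": 0.95},
    {"student": "B.", "date": date(2024, 1, 15), "grade": "C", "attendance": 0.80},
]

# Group the dated records by student so each student's progress
# can be read across time, and students can be compared.
by_student = {}
for rec in records:
    by_student.setdefault(rec["student"], []).append(rec)

for student, recs in sorted(by_student.items()):
    recs.sort(key=lambda r: r["date"])  # chronological order
    trail = "; ".join(f'{r["date"]}: grade {r["grade"]}' for r in recs)
    print(f"{student} {trail}")
```

The same structure maps directly onto a spreadsheet: each dictionary key becomes a column, and sorting by date within each student shows change over the program year.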
Problem. In the past, evaluations focused on assessing the fidelity of a program to its model, asking, for example, about the characteristics and frequency of activities and the number of participants. In more recent years, the questions have focused on outcomes: "To what end? What is the impact on children and families?" You need to ask both types of questions in order to understand how and why certain outcomes are occurring—or not occurring.
Solution. When developing evaluation questions, ask about outcomes as well as about activities and characteristics. Then if, for example, one group of third-graders is progressing faster than others, you can review data about program implementation to find variations in the activities or services provided to that group.
Problem. We’re no longer interested in creating "doorstop" evaluations—the ones that weigh so much their only use is to prop open your office door. We want to develop evaluations that actually tell people something meaningful about their programs, so that they can "reduce uncertainties, improve effectiveness, and make decisions."
Solution. Involve as many stakeholders as possible—parents, students, staff, principals, funders—in the evaluation process. Hold a stakeholder meeting at the beginning to determine which outcomes to measure and what questions to ask. When the evaluation meets the needs of all your constituents, they are more likely to buy into the process and use the data.
If I’ve convinced you to be more intentional about evaluation, you probably have some of the same questions afterschool staff often ask me.
While research and evaluation share many of the same methods—surveys, interviews, observations, focus groups—their purposes are different. Research attempts to produce generalizable knowledge. Program evaluation collects information to inform decision making. Researchers therefore choose samples that are representative of larger populations. Program evaluations deal with a pre-determined sample population—participants, staff, parents, and so on. Program evaluation findings do not need to be statistically significant; the populations are often too small. However, the methods must be rigorous: gather information in many different ways, from many different people, at many points in time.
Program staff often say, "We only have anecdotal information about our program." Sometimes they mean that they have an ad-hoc collection of quotations and stories. Other times, they mean that they have true qualitative data collected systematically through interviews, focus groups, and observations. Qualitative data can be very useful.
Report cards and test scores are designed to measure the impact of school activities on student outcomes. Most afterschool programs create learning environments that are quite different from school. You need to measure factors that correspond to your program goals and objectives. If analysis of your program using your logic model indicates that program activities are likely to have a direct impact on children’s grades or scores—for instance, if you have a daily remedial reading program, not just three hours a week of literacy activities—then use those tools. Otherwise, develop instruments that track the outcomes you’re trying to achieve: love of reading, leadership, critical thinking, or whatever. Funders who ask for evidence of improved school-based outcomes can often be convinced that other outcomes are equally important. You can find many examples of program evaluations and instruments in the Harvard Family Research Project’s Out-of-School Time Program Evaluation Database.
Participatory evaluation has gained popularity because it has proven to be an effective approach to increase programs’ usage of evaluation findings. Depending on program needs, various stakeholders might develop evaluation plans, create instruments, collect and analyze data, write final reports, and give presentations—or they might only create the plan and review the results. When stakeholders are involved in data collection and analysis, they’re more likely to want to use the findings to improve the program. The Collaborative, Participatory, and Empowerment Evaluation website offers definitions and helpful resources.
Being engaged in participatory evaluation can galvanize the staff to make program improvements. Staff members often say they feel empowered by being involved in evaluation; they value the opportunity to develop new approaches for dealing with issues and often feel they have enhanced their own skills and marketability.
This month’s Professional Development feature suggests ways you can start involving program staff in evaluation. You can learn one simple evaluation procedure from Harlem RBI’s example in How I Did It. Explore the websites listed in this article or on our Links page, and if you really want to go into depth, read the book I recommend in this month’s Bookshelf.