If you’re a nonprofit leader and you’ve ever applied for a grant, you know the grant writing process includes answering this question: “How will you evaluate your project?” Funders want to know what difference your organization makes, and you are expected to provide data that demonstrates your organization’s impact.
That, in essence, is the point of program evaluation, which is the process of collecting, analyzing, and using information about a program’s activities to assess the program’s effectiveness and efficiency. Funders look for evidence that your program is achieving the intended outcomes, to get confirmation that their contribution was money well spent.
How do you know that your program is effective? This is perhaps the most important question a nonprofit can answer for itself, the people its program serves, and the donors who support its work. The best way to answer this question is to understand, measure, and communicate the value of your program by evaluating your program’s outcomes.
It’s not enough to simply provide the number of people served (e.g., “16 students enrolled in our after-school reading program”). These are outputs. Outputs measure simple metrics such as the number of participants, service hours delivered, modules completed, etc. Tracking outputs enables you to state what services you provided and for how many people. Outputs are necessary data in your evaluation, but you shouldn’t stop there if you want to satisfy grant makers.
Outcomes measure the changes you expect to see. What has changed for participants as a result of your program’s activities? Outcomes are generally expressed in terms of changes in knowledge, skills, attitude, and/or behavior. The question “What impact is our program having?” should serve as a guide in program evaluation. You can then align the difference you expect to make with what you measure and report.
RELATED: Download our free Guide to Program Evaluation.
In a program evaluation, you (1) identify the changes you want to make, (2) gather data to measure those changes, and (3) report what the data say about the expected change. Through this process, you can provide professional-level evidence to funders that you’ve achieved what you said you would when you wrote the grant request. Your program made a difference, and you want to provide the data that clearly demonstrates that to stakeholders.
Now let’s examine the steps in a program evaluation for a hypothetical nonprofit program that provides reading instruction to high-school students with mild learning disabilities in an after-school setting. We’ll call it “Read2Achieve” or R2A.
An effective program evaluation should start with a logic model. A logic model is a visual roadmap that demonstrates the outcomes you expect from a specific activity or group of activities.
R2A stated their outputs as: (1) number of students enrolled, classrooms utilized, and teacher/facilitator staff employed; (2) number of modules completed; and (3) number of assessments completed and student scores on those assessments.
R2A’s outcomes included: (1) students’ increased reading comprehension and language development, and (2) increased student academic self-efficacy. R2A determined that student academic self-efficacy, or students’ attitudes toward their own ability to achieve academic success, is important for overall school success, even outside the R2A program, and it is therefore a key outcome for the program.
Once you’re clear on your program’s intended outcomes, you can create data collection tools (e.g., surveys, focus groups) that align with the outputs and outcomes you identified in the logic model. Output data are typically tracked in spreadsheets or databases (e.g., which services you provided and to how many people), while outcome data require gathering information from program participants. Participants include people who receive services directly (e.g., students) and those who can observe changes in participants (e.g., teachers/facilitators).
R2A decided to collect data through pre- and post-surveys with all students in their program (they currently operate programs in five schools in the county). One survey item, for example, asks students to respond to this question:
Because of this program, do you feel better able to be successful in school, less able, or about the same? Circle one:
a. Better able
b. Less able
c. About the same, still able to be successful
d. About the same, still not able to be successful
Any comments about your answer (optional):
The report for R2A highlighted the outputs identified in its logic model: students enrolled, modules completed, and assessments completed, along with student scores on those assessments.
Outcome data showed an average increase of one grade level in reading over the 10-week program, and that more than 87% of participants felt better able to achieve overall academic success.
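For readers who track this kind of data in a spreadsheet export or database, summary statistics like these are straightforward to compute. The sketch below is illustrative only: the field names and sample records are hypothetical, not R2A’s actual data.

```python
# Hypothetical sketch: summarizing pre/post outcome data for a reading program.
# Field names and sample records are illustrative, not R2A's actual data.

def summarize_outcomes(records):
    """Return the average reading grade-level gain and the percentage of
    participants who report feeling 'better able' to succeed in school."""
    gains = [r["post_grade_level"] - r["pre_grade_level"] for r in records]
    avg_gain = sum(gains) / len(gains)
    better = sum(1 for r in records if r["self_efficacy"] == "better able")
    pct_better = 100 * better / len(records)
    return avg_gain, pct_better

# Illustrative pre/post records for three students
sample = [
    {"pre_grade_level": 6.0, "post_grade_level": 7.2, "self_efficacy": "better able"},
    {"pre_grade_level": 7.5, "post_grade_level": 8.3, "self_efficacy": "better able"},
    {"pre_grade_level": 5.8, "post_grade_level": 6.8, "self_efficacy": "about the same"},
]

avg_gain, pct_better = summarize_outcomes(sample)
print(f"Average grade-level gain: {avg_gain:.1f}")
print(f"Feel better able to succeed: {pct_better:.0f}%")
```

Even a simple calculation like this, run on real participant data, turns raw survey responses into the headline figures funders look for in a report.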
The combination of illustrating who participated and the program’s impacts provides a comprehensive picture of the degree to which R2A is creating a pathway to overall academic success.
This R2A example illustrates how to define outputs and outcomes and then align them with data collection and reporting in a program evaluation. Defining and measuring the outputs and outcomes in the evaluation plan required thoughtful collaboration. Those anchor points served as a springboard for creating data collection tools aligned to the program’s outputs and outcomes: the student pre- and post-surveys and the teacher/facilitator surveys gathered outcome data, while output tracking documented activities. Together, these gave R2A a holistic picture of what went well, what needed improvement, and the overall impact. The data then funneled into a compelling report, aligned to the original outcomes and outputs, that let R2A share with confidence the difference the program makes, as well as opportunities to improve.
If you are a nonprofit leader faced with evaluating your program’s effectiveness, we hope this brief article has provided you with the foundational knowledge you need to get started. If you’d like help with the process of planning and implementing a full-scale program evaluation, consider making use of the consultants at LaBarbera Learning Solutions. We’re an experienced team of researchers, evaluators, and educators with the expertise needed to demonstrate your program’s impact to stakeholders. See our cost-effective solutions at www.labarberalearning.com.