Greater Good Blog

To Maximize Impact, Evaluate More than Just Outcomes

Julie Slay

Often it’s not enough just to know what works. We have to understand how people make it work if we want to increase our impact or replicate our success.

It’s a lesson I’ve learned repeatedly in my years of conducting evaluations. It’s also the main idea behind a recent article in the Stanford Social Innovation Review by Lehn Benjamin and David Campbell, two professors who study the use of evaluation in philanthropy. They argue that looking only at programs and their direct outcomes—the focus of many evaluations—can be misleading. Funders, they suggest, should also recognize and better understand the value frontline service providers often add in the course of implementing programs, from building relationships to connecting program participants with other resources. Based on my own experience, I would go a step further. I would argue that how grantees work to implement a program often has important effects both on the outcomes the program envisions from the start and on unintended outcomes that may ultimately prove just as important as the intended ones.

The implication for evaluators is clear: it’s not enough just to know the outcomes. You also need to know how people produced them.

I have seen this play out in many ways. To take just one example: An after-school program had examined its student outcomes annually for over five years and confirmed that it was achieving impact. However, administrators knew that some school sites had fewer hiccups and better student attendance than others. Site leads had always had the latitude to tailor the after-school program to community needs, but the organization wondered whether student outcomes were influenced by how each school was implementing the program. It surveyed more than 50 of its after-school sites and found that student outcomes were indeed associated with program implementation. Students at after-school sites that had established strong relationships with the schools where the programs took place—and that had greater access to rooms and equipment at those schools—demonstrated higher academic performance and fewer problem behaviors. Parent involvement in the program also influenced how students fared.

By looking closely at what the people implementing the program most successfully had done, this evaluation enabled the after-school administrators both to understand critical elements of the program and to enhance its effectiveness through activities that might otherwise have been seen as ancillary—activities like building relationships with schools and parents. Administrators could start targeting improvements at sites where the relationship between the after-school program and the host school was weak. They created a simple annual survey to identify strong sites and encourage them to share their knowledge with sites that were struggling. They could provide training to help after-school site leads build relationships with their host schools, and teach them family engagement activities to encourage parent participation. And they could share their program model with others, in detail, clarifying how to achieve better personal, social, and academic outcomes for students while ensuring children's safety after school.

None of this is to suggest that the core work of evaluating the connections between programs—or even elements of programs—and outcomes is unimportant. On the contrary, the point here is that, in many cases, we have to go deeper if we want to use evaluation to maximize impact. Funders looking to improve the work of individual grantees and scale their own programs shouldn’t just assume that they can isolate the elements most responsible for a project’s success and build upon or replicate them. If we want to confidently scale successful approaches and generate the same results, we also have to invest time and resources exploring exactly how programs relate to outcomes through the work of the people who implement them.

To tease out the most effective pieces of a program, we need to ask ourselves: How do we think the outcomes were supposed to be achieved? What new issues were introduced once the program was up and running, and how might they have affected the outcomes? What were the assumptions we made about how the program should work? Did they pan out in practice, or were other factors related to implementation critical?

In other words: how did the real people who did this work get it done?


Julie Slay leads Arabella’s evaluation practice, employing a range of methodologies and tools to help clients understand the effectiveness of their grants and other investments—and ultimately determine how they can best use their resources to achieve the outcomes they seek. She directs teams using both qualitative and quantitative evaluation approaches to conduct developmental, formative, and retrospective evaluations, as well as to develop evaluation frameworks, tools, and instruments that enable ongoing learning, effective monitoring, and practical program management.
