
How funders can better evaluate place-based initiatives

We and our funder colleagues make several grants in specific communities each year, but we often struggle to understand the cumulative impact of these investments in those communities. So Grantmakers for Effective Organizations, known as GEO, recently formed a group of private funders and federal agencies to generate new ideas for evaluating overall community impact in places where we fund multiple programs.

At a recent gathering in Washington, D.C., the group held several interesting conversations about evaluating place-based philanthropy. The biggest takeaways pertained to the following questions:

  • What constitutes ‘place-based’ philanthropy? The group agreed that ‘place-based’ initiatives center on a geographic location, though this can range from a city neighborhood to an entire region. However, many felt that digital networks and online communities increasingly undermine the primacy of physical boundaries, as discussed in Knight’s Connected Citizen report. The group also identified the primary characteristics that define a place: individuals (children and families), systems, policies and environment.
  • What should be considered credible evidence? When evaluators assess program impact, should they use only scientific studies that span several years, or should they adopt a more holistic approach that includes interviews, focus groups and trends? Lisbeth Schorr from Harvard’s Center for the Study of Social Policy facilitated a great discussion, based on her recently released paper Expanding the Evidence Universe, about moving beyond a steadfast reliance on randomized controlled trials and embracing mixed-method approaches. Most foundations seemingly hold this view; more often they face the opposite problem of obtaining any evidence of programmatic impact at all. Federal agencies, however, undoubtedly face higher standards for evidence credibility, which may unintentionally stifle innovation and effectiveness by limiting funding options to a scant set of programs validated by long-term academic research.
  • Who is the audience for the evaluation? Foundations and agencies struggle with which key questions to ask and what kind of data to collect. Much of this depends on the audience for the report, be it the board of trustees, the CEO or the staff running the program. The Annie E. Casey Foundation and its evaluation partners shared lessons learned from evaluating the Making Connections initiative. The evaluation gathered several indicators of community-level change, but the sites implementing the program struggled to glean insights from the data that could be applied to managing the program. Meanwhile, other funders cited pressure to distill the impact of these very complex initiatives into a handful of quantitative indicators. Ultimately, these conversations stressed the importance of tightly defining audiences and their needs when planning an evaluation.

The group will gather regularly over the coming year to exchange lessons and spot areas ripe for ongoing collaboration. National Program Associate Jeff Coates and I will continue to share insights on the subjects the group plans to explore, which include:

  • Identifying common metrics useful for evaluating place-based initiatives.
  • Leveraging scorecards and dashboards to communicate impact.
  • Developing evaluation approaches that build long-term community capacity to measure impact.
