Evaluation. Good to Great.

Julie Fry

When assessing the results of the work of arts organizations, do we measure the right things? Can we measure whether the art itself is good? This continues a dialogue that began in GIA Reader, Vol. 17, No. 3.

Bruce Sievers, Skirball Foundation; Diane Ragsdale, The Andrew W. Mellon Foundation (co-presenters, moderators); Suzanne Callahan, Dance/USA (interlocutor).

"Perhaps arts organizations count the numbers that are easy to count not only because funders have asked them to, but because it's easier than asking, "Are we doing worthwhile artistic work?" and "Do we matter to this community?"

Thus ended an article on assessing the work of arts organizations in the winter 2006 GIA Reader. The quote, from Diane Ragsdale of the Andrew W. Mellon Foundation, came from a discussion with the Skirball Foundation's Bruce Sievers. Ragsdale and Sievers opened the GIA conference session on evaluating arts nonprofits with a distillation of Sievers' comments on Good to Great and the Social Sectors by Jim Collins. In that monograph, Collins brings a nonprofit perspective to his book Good to Great, which laid out the principles he believes differentiate high-performing companies.

In Sievers' view, Collins' use of a transactional model of inputs and outputs to assess a nonprofit's success poses both practical and philosophical problems. The practical problem: with a huge number of variables (foundation funding is only one input in a complex world, after all) and lengthy time horizons, the nonprofit sector cannot afford costly, controlled experimentation with inputs and outputs. The philosophical problem: there is no single bottom line in the world of philanthropy, so we have to define what it will be. As Ragsdale added, "Do you value the artwork itself or the benefit it gives people?" Which matters more: the quality of the work or how many eyeballs touch it? And if we have more leisure time, why are arts audiences on the wane, and what does that say about the benefits of the arts?

Interlocutor Suzanne Callahan of Dance/USA commented that arts nonprofits cannot show success on a spreadsheet alone, and that systems thinking has traditionally been missing from debates about evaluation. She listed some principles that have proven effective in evaluating arts nonprofits:

  • Evaluations must remain as central as possible to the mission and context of the program.
  • SurveyMonkey is not always successful in returning useful data (with apologies to the SurveyMonkey folks).
  • Nonprofit arts leaders must acknowledge evaluation results and be willing to act on them.
  • Successful evaluation requires dedicated staff time or outside assistance.

Inevitably, this question arose in the dialogue: Is evaluation even necessary? Assessing program success and the effect of grants is much like raising kids, Sievers said. It is a long-term process with small pieces of data (like the battles over homework and curfews) that may not accurately depict an organization's accomplishments over time. Fewer grantees and deeper relationships may help funders better understand how their support moves a grantee, community, or sector toward a positive outcome. But measuring those outcomes can be costly.

Additional thoughts from the funders in the room:

  • Public funding agencies have no choice: politicians look at grant evaluation numbers annually as an indicator of how public money is being spent.
  • There is a sense that grantees should be getting better and better... but at what?
  • Is a foundation responsible for imposing stringent evaluations, or should grantees feel more accountable for giving their audiences good experiences?
  • Some programs serve modest numbers of people but are doing crucial work, and the case needs to be made for the difference a program (or organization) can make.
  • Art isn't just about the end product.
  • Paper evaluations are skewed toward reporting what funders want to hear. Can individual evaluation interviews uncover more useful information?
  • Funders often ask for evaluations at the end of the fiscal year when grantees are just plain tired out from delivering programs.
  • The Cultural Data Project will be a big help in delivering organizational, financial, and trend data about grantees, but not the stories.
  • We have a culture of not criticizing anyone.
  • Project support and general operating support require very different evaluation methods.

The question left hanging in the air as the session ended circled back to this: How do we talk about the quality of the art?