Two Viewpoints on Evaluation
What can evaluation accomplish for grantmakers and grantees? What roles should each play in the design and execution of the evaluation process? Recent briefings from The Conservation Company and the Neighborhood Funders Group examine these questions from different vantage points.
Evaluation: The Good News for Funders
July 2003, 30 pages. Neighborhood Funders Group, One Dupont Circle NW, Suite 700, Washington, DC, 20036, 202-833-4690, www.nfg.org/publications/evaluation.pdf. Hardcopies available for $10.
Learning As We Go: Making Evaluation Work for Everyone
June 2003, 12 pages. The Conservation Company, 50 East 42nd Street, 19th floor, New York, NY, 10017, 212-949-0090, www.tccgrp.com/know_brief_learning.html.
In Evaluation: The Good News for Funders, Andrew Mott makes the case for participatory evaluation practices. Borrowing from Sherry Arnstein's landmark 1969 "Ladder of Citizen Participation," Mott explores a continuum of tactics grantmakers can use to engage grantees and project beneficiaries in evaluation. He also classifies funders' evaluation needs into three overarching categories: to document grantee performance, to assess the impact of a grant project in a way that informs future funding decisions, and to build grantee capacity. Mott asserts that all three goals, while necessitating different methods, are equally worthwhile and supported by a participatory evaluation approach. However, each goal creates different expectations and grantor-grantee dynamics, and those dynamics may exacerbate the inherent power imbalance between funders and nonprofit organizations. Emphasizing a "first do no harm" philosophy, Mott recommends managing evaluation practices through candor, clarity, and a realistic understanding of the divergent assessment motivations that funders and their grantees may hold.
In Learning As We Go: Making Evaluation Work for Everyone, The Conservation Company (a consulting firm serving both funders and nonprofit clients) considers program assessment from the perspective of "evaluative learning." Like the methods outlined by Mott, evaluative learning engages funders and grantees in all stages of the evaluation process. The cornerstone of evaluative learning, however, is not the information needs of the grantmaker but the learning capacity of the grantee organization. Author Peter York organizes his discussion around eight key dimensions of evaluation: purpose of the evaluation, audience, choice of evaluator, design of methods, data collected, reporting, interpretation of results, and frequency of assessment. Degrees of funder involvement, grantee involvement, and organizational learning vary within each dimension. Using this framework, York suggests that traditional evaluation (infrequent, funder-driven, and conducted by an objective outside evaluator) offers limited learning opportunities for grantees. In contrast, maximal learning occurs when the evaluation design aligns with grantees' natural planning, program design, and governance processes and strikes a balance between the organizational priorities of grantees and the assessment needs of funders. The briefing includes several logic models, checklists, and diagrams illustrating evaluative learning practices.
Although The Conservation Company piece cites work conducted for the Massachusetts Cultural Council, neither briefing discusses arts evaluation in any depth. Both resources, however, offer practical evaluation tips, how-to advice on navigating grantor-grantee relationships, and conceptual frameworks that readily apply to philanthropy in the cultural sector.