Collaborative, Iterative, and Responsive

Agile Techniques Transform MAP’s Grantmaking

Lauren Slone and Kevin Clark

How MAP took concrete steps to enact greater racial equity in our grantmaking by incorporating Agile practices in our application processes

Since 1988, the MAP Fund’s core grant program has provided cash grants to contemporary performance projects that meet the following criteria: new, original work by vocational artists that demonstrates a spirit of deep inquiry in form, content, or both, and that — in particular — challenges inherited notions of social and cultural hierarchy. These criteria help MAP identify emerging models and fresh themes that seed the field for growth, undertaken by artists who are vastly underrepresented in traditional arts philanthropy.

Between 2014 and 2016, MAP’s Executive Director Moira Brennan, Program Manager Lauren Slone, and Program Associate Kim Savarino began a deep examination of — and renewed commitment to — one of the program’s foundational priorities: racial equity in arts and culture grantmaking. We moved from the initial impulse, “MAP needs a simple technological upgrade to better enact anti-racist and anti-oppression values in the application itself,” to a much larger redesign of our core grantmaking program. At the same time, we established MAP’s nonprofit status for the first time in nearly thirty years to increase organizational capacity in support of these initiatives.

To do this work, we hired consultant Kevin Clark, who has expertise in philanthropy and software product management. Through an intensive collaboration, we created a vision and implementation strategy based on these foundational criteria:

  • Advance MAP’s commitment to racial equity by reducing friction (eliminating barriers to initiating and/or completing the application) based on user experiences and feedback.
  • Minimize applicants’ labor to the greatest degree possible (even as that choice sometimes meant increasing labor for reviewers, panelists, and staff).
  • Incorporate Agile software practices into our processes to effectively respond in real time to rapidly evolving language, dynamics, and trends in the national performance ecology.
  • Make choices that support MAP staff in taking full control over the technology tools (their management had been outsourced until this point), and encourage frequent experimentation, iteration, and risk taking in the systematic improvement of our processes.

Kevin encouraged us to migrate away from the custom-built platform that we had been using since 2006 to a set of commercial tools: WordPress.com, MailChimp, Zendesk, Slack, and Submittable. This particular constellation of new tools allowed staff to manage routine maintenance and upkeep themselves without an external contractor, make immediate changes to content, and complete the redesign process at far lower cost than that of creating another custom-built system. Further, MAP staff could keep up with the day-to-day demands of the current grant cycle while simultaneously learning the new systems, understanding their “creative constraints,” and beginning to create fresh content.

Instead of solely relying on our own sense of whether the new application was achieving our stated criteria, we enlisted Kevin’s help to incorporate user testing (a key process that drives Agile software development) for each new draft. We regularly brought together small groups of artists, grantees, former applicants, past panelists, and colleague grantmakers to listen to their questions, and, crucially, watch how they interacted with the tools.

As a result, we were able to rewrite language, change sequencing, and improve the experience before ever rolling out the new changes to the larger public. MAP’s “feedback loop,” or “build-measure-learn” cycle, was shortened from a year to a matter of weeks, and the process centered our constituents’ real, observed needs at every stage.

Advancing Racial Equity: Concrete Actions

Institutional Influence

We noticed that our old grantmaking technology was beginning to feel outdated partly because of accumulating evidence of larger, dynamic shifts in the field of performance. In what is now an ongoing trend, artists and organizations were devising ever more complex, “hybrid” producing structures out of necessity, in order to resource new productions fully, or even close to fully. Through extensive interviews with producers and more attuned observations of reviewers’ and panelists’ reactions to institutions in the evaluation process, we became more aware of how institutional affiliations (or a perceived lack thereof) could positively and negatively influence an application’s trajectory through a grant cycle.

MAP has always received applications from artists who produce their work in a variety of ways. However, our former application was structurally limited in such a way that it presented false, overly simplistic narratives about how new work is made: either an institution produced the applicant’s work, or the applicant produced the applicant’s work. Artists of color, in particular, expressed deep frustration about needing to “bend” or “hack” MAP’s application platform to better reflect how their work was actually being produced. Collaborative models, nonhierarchical creation structures, and iterative or process-based projects that resist traditional, European concepts of “premiering” work via white institutional structures encountered particular difficulty in the application process. Some potential applicants did not apply because they felt they had no choice but to misrepresent their work.

This issue was further compounded by the fact that MAP requires that a US-based nonprofit organization be affiliated with the project and be able to receive and distribute the grant. In our former application, artists who did not have their own nonprofit or an affiliation with an institution needed to establish a formal relationship with a fiscal sponsor in order to submit a letter of inquiry (LOI).

The old application structure meant that we were prioritizing program compliance and staff needs over applicants’ time and labor (an onerous burden for applicants who have less than a 4 percent chance of receiving a grant). Furthermore, the ability to easily arrange fiscal sponsorship is not evenly distributed — the burden this requirement created was not equitable.

We decided, therefore, to experiment with reducing the influence of institutional affiliations in the application, and we removed the requirement that an applicant demonstrate a formal commitment from a fiscal sponsor at the LOI stage (shifting the burden of proof to the full application stage, which involves approximately eighty applicants, each of whom is far more likely to receive a grant). We are still in the process of learning about the effects of this important structural shift.

Work Samples

Another point of friction was our work sample structure. Only those applicants invited to the full application stage had an opportunity to submit samples of their work. Before recent technological advances, it was much more expensive and laborious to film, edit, and send in live performance documentation. So, MAP wanted to ask only those applicants who were advancing to the panel review stage to do that work.

Unlike our old platform, Submittable could accept URLs and media passwords and receive uploads of large, full-quality files. This meant that artists didn’t necessarily need to spend any time editing or cutting to a certain cue point, for example. Furthermore, with modern cloud-hosted media samples and high-quality filming on personal devices, such as iPhones and tablets, for many applicants it now takes less effort to provide work samples via YouTube or SoundCloud than to draft a competitive project narrative.

Because we made cuts primarily on the basis of the written narrative, applicants who work in decolonized ways, whose work is best represented in audio/visual formats, whose work defies written categorization, for whom English is not a first language, or who lack access to support for the specialized trade of grant writing all faced significant barriers both to initiating an application and to advancing to the panelist review stage. So we shortened the written narrative in the first round of review (from a thousand words to five hundred) and created fields in Submittable for up to three five-minute work samples. Many returning applicants (those who applied in grant cycles prior to Submittable) shared positive feedback that they were “relieved” to see this particular shift in our application process because “it better reflected the ways [they] wanted to represent [their] work.”

Our next step is to better prepare reviewers and panelists to unearth and challenge their cultural, aesthetic, and racial biases as they observe more than 2,400 work samples in the first round of review.

Creating Dockets

MAP hires two different cohorts of artists and arts leaders from around the country each year to assess proposals. During the first evaluation phase, reviewers work independently and narrow the applicant pool by 85 to 90 percent. During the second phase, panelists gather live over the course of several days to make final funding recommendations.

In the past, we used the custom-built database to sort applications by discipline (dance, music, or theater), and then assigned each group of projects to reviewers with expertise in that area. Many applications ended up with the same combinations of reviewers, and it was difficult technologically to control for cultural, aesthetic, or racial biases. Assigning projects with this method (created years ago to save vast amounts of staff time) meant, for example, that a project centering Butoh traditions could be randomly assigned to zero reviewers familiar with those forms, or that an applicant who self-identifies as a person of color could be assigned all white reviewers.

Through our antiracist and anti-oppression training, we began to realize that our old tools for pairing applicants and reviewers were not only creating inequities but also potentially supporting reviewers’ aesthetic biases rather than disrupting or challenging them. To address this, we made these changes:

  • New application fields: We added two questions to the first-round application. Applicants were required to identify which reviewers they felt would best understand their project by selecting up to three boxes: “dance/performance specialists, music/performance specialists, and/or theater/performance specialists.” Applicants also had an optional space to specify any aesthetic frameworks that were informing the project (for example, an applicant might write in “jazz” or “chamber opera”). The combination of these answers helped MAP staff acquire a much faster, clearer snapshot of which reviewers might best serve the proposal based on the applicant’s own input.
  • Creating dockets “by hand”: We wanted to ensure that no two reviewers had the same docket and that pairings would be created based on as close a match as possible between what the applicant identified and the reviewers’ areas of expertise. Keeping the dockets dissimilar also reduces the mathematical impact of differences in scoring behavior.

Because Submittable would not support this level of granular customization specific to MAP’s needs, we created our own tools. We exported the project data from Submittable into a massive spreadsheet alongside all of the reviewers’ information, manually assigned each project to the appropriate number of reviewers, and then transcribed the assignments back into Submittable to create the online dockets. To be clear, the process was not easy (more than 4,800 manual pairings), but the tradeoff resulted in a more artist-centered and equitable proposal review.
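For colleagues curious about what automating part of that matching might look like, here is a minimal greedy sketch in Python. It is not the tool MAP used (our pairings were made by hand), and the data shapes, the five-reviews-per-project figure, and the tie-breaking rule are all illustrative assumptions:

```python
def assign_dockets(projects, reviewers, reviews_per_project=5):
    """Greedy docket-assignment sketch (illustrative only).

    projects:  dict mapping project id -> set of specialties the
               applicant selected (e.g., {"dance", "music"})
    reviewers: dict mapping reviewer name -> set of areas of expertise

    Each project goes to the reviewers whose expertise best overlaps
    the applicant's self-identified specialties; ties are broken by
    current docket size, which balances load and makes identical
    reviewer combinations less likely to recur.
    """
    load = {name: 0 for name in reviewers}
    dockets = {name: [] for name in reviewers}
    for project, wanted in projects.items():
        # Rank reviewers by overlap (descending), then by load (ascending).
        ranked = sorted(
            reviewers,
            key=lambda name: (-len(reviewers[name] & wanted), load[name]),
        )
        for name in ranked[:reviews_per_project]:
            dockets[name].append(project)
            load[name] += 1
    return dockets
```

A real version would also have to respect docket-size caps and identity-aware constraints (for example, making sure an applicant of color is not assigned an all-white reviewer group), which is exactly the judgment-heavy work that kept this step manual for us.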

We have more to learn about how to streamline this process and about the impact of the staff’s influence in making the assignments.

More Nuanced Score Assessment

Though we provide an online Slack channel to reviewers to enable them to share questions and to streamline our support to them, the independent review process does not require a consensus discussion about scoring. MAP uses a scale of 1 to 4 (4 indicating that the project absolutely aligns with MAP’s funding priorities, and 1 meaning that a project does not align). Reviewers also have unrestricted voting (meaning they can give as many 4s, 3s, 2s, and 1s as makes sense to them within their docket).

As a result, merely averaging the scores we are given would distort the reviewers’ intent. When one reviewer scores half of their projects as 4s, and another scores only one or two projects that high, it is extremely unlikely that one docket was that much more deserving than the other. It is far more likely that the reviewers’ different understandings of what a 4 means are driving the disparity.

So we tried combining our scores in a different way, by applying what we call a “behavioral adjustment,” created by our consultant Kevin Clark (who initially built the tool for New Music USA’s project grants) to solve this exact problem. Instead of averaging raw scores, we take a probabilistic approach that factors in the way each reviewer used the scale. We replace each score with the probability that the reviewer, based on their overall scoring pattern, would give a project a lower score. Where previously one reviewer could skew the final ranking by “gaming” the number of 4s or the kinds of projects they were interested in advancing within their docket, now no single reviewer has the power to dramatically change outcomes. This algorithm can be implemented simply in a spreadsheet and allows us to more confidently provide a fair review of nearly a thousand submissions each round through an entirely remote first-stage process.
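As a minimal sketch, assuming the adjustment is the empirical probability of a strictly lower score within each reviewer’s own docket (MAP’s actual tool lives in a spreadsheet, and its details may differ), the per-reviewer step looks like this in Python:

```python
from collections import Counter

def behavioral_adjustment(docket_scores):
    """Replace one reviewer's raw scores (1-4) with the empirical
    probability that this reviewer gave a strictly lower score.

    docket_scores: dict mapping project id -> raw score
    Returns:       dict mapping project id -> adjusted value in [0, 1)
    """
    counts = Counter(docket_scores.values())
    total = len(docket_scores)
    p_lower, running = {}, 0
    for score in sorted(counts):
        p_lower[score] = running / total  # share of docket scored below
        running += counts[score]
    return {project: p_lower[s] for project, s in docket_scores.items()}

# A generous reviewer's 4 (half the docket) and a strict reviewer's
# rare 4 land in very different places after adjustment:
generous = behavioral_adjustment({f"g{i}": 4 if i < 5 else 2 for i in range(10)})
strict = behavioral_adjustment({f"s{i}": 4 if i == 0 else 2 for i in range(10)})
print(generous["g0"])  # 0.5 -> half of this docket scored lower
print(strict["s0"])    # 0.9 -> nine of ten scored lower
```

Averaging these adjusted values across a project’s reviewers means that a lone 4 from a tough scorer counts for more than one of many 4s from a generous scorer, and no single reviewer can swing the final ranking by flooding their docket with top scores.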

Ongoing Evolutions

MAP’s process significantly accelerated our taking control of our tools and, therefore, our ability to better embody antiracist and anti-oppression principles in all aspects of our work. Of course, some of our experiments didn’t work as we had hoped, and our constituents were gracious enough to let us know about the problems. We are trying new iterations in the 2018 cycle in response to that feedback, and the same will be true in 2019, 2020, and beyond.

In closing, it is important to share that being open to making such frequent changes means welcoming uncertainty and experimentation into our process. That can be scary, because we have taken on added accountability and the stakes of getting it right are high. We found that Agile software development techniques, with their focus on user research and frequent, small changes, have made it easier to take those risks and to strive constantly to make our process more equitable.

As always, we welcome more dialogue with any colleagues who have questions about this work or who are interested in sharing ideas and accountability, and we encourage you to learn more about our process and the artists we fund at mapfundblog.org.