Monday, October 13, 2014

Facilitating Evaluation, Imperfectly

1. A friend, the Executive Director of a non-profit educational organization, came to me to talk through a challenge she was grappling with at work. She felt that her financial supporters really wanted to know – they kept asking – what the impact of her organization's work is. She would not mind knowing her work's influence either, but she trusts that the particular intervention she creates, the educational experience her organization facilitates, is well-rooted in sound educational theory and is effective. At the same time, she is fairly certain that it is impossible to prove impact, and she isn't even sure how much impact she can expect from this one intervention. The experiences her organization facilitates are quick. They are integrated into the daily lives of the families she reaches out to. The experience is hard to isolate from the other interactions those families might have.

So, she asked, how does she pursue evaluation?

2. A potential client came to me looking for an evaluation of a new program. The program, the client admitted with frustration, had been launched by a third party with none of the smart practices that this client had learned are associated with evaluation. There was no logic model, no set of established goals, no uniform expectations of what this program was supposed to be doing. I spoke a little bit about developmental evaluation, about evaluation techniques used with program innovations, about doing evaluation research with less certainty than we often have. I heard her frown over the phone. “But it's not a program innovation,” she explained, “It's not innovative; it's just an idea that some practitioners had and ran with.”

So, she asked, how does she pursue evaluation?

3. A potential client came to me because he had funds in his organization's budget for evaluation. But, he acknowledged, he had no real interest in evaluation. It seemed to him like busywork, meant only for financial supporters, with surveys and questions that do not capture the essence of what he does.

How does he pursue evaluation?

These three stories are not exactly the same. In each case, though, the practitioner seemed to be coming to me with the idea that they needed to engage in something called evaluation and that evaluation would produce an answer to the question, “Does this program work?” They seemed to see this as a yes/no question, related to long-term life change, with little room for the flexibility and fluidity that characterize their real work and lives.

In the most robust circumstances – with plenty of clients and financial resources both, where different environments can be accessed and a control group isolated – such definitive questions might be answered through evaluation.

Even while such evaluations sometimes take place in the social sector, these circumstances frequently do not exist in Jewish institutional life. Perhaps more money would help, but it might not be a question only of resources: Our programs are (relatively) small in scope and not able to support the breadth required for even quasi-experimental research designs. That is, we lack the size and perspective necessary to create the random assignments for experiments, as well as the real control mechanisms needed for almost-experiments, used when random assignments are not possible. (This is why the evaluation research on Taglit: Birthright Israel is so valuable: the size of the population and the nature of the process – that there are those who participate and those who applied but did not – give the program heft and make it suitable for something close to experimental work.) Moreover, the complexity involved in religion and religious participation is deep. Definitive evaluation needs to isolate the impact of an intervention. Can we imagine asking a group of families, not connected to Jewish institutional life, to participate in evaluation research as a control group for, say, synagogue members? How would we pick the families? Secure their participation? What if they changed their minds during the experiment and became involved in synagogue life? The formation and exercise of ethno-religious identity make it exceedingly difficult to separate the role of one specific program from the interactions that occur across a host of programs and experiences.

Still, this question (“Does the program work?”), this idea that evaluation can answer such black and white questions, seems prevalent. We seem to have sold evaluation as unnuanced. Or, perhaps we have successfully conveyed that true evaluation has strict rules. As a result, practitioners wonder if evaluation can be useful to them, since it seems not to fit their circumstances.

The point is, it rarely fits any of our circumstances in Jewish life. So, we do something else:

We borrow from the field of evaluation research in order to answer our specific questions.

I began my conversation with each of these practitioners, in each of the above cases, by asking: What do you really want to know? When we open up that question – when we ask what we really want to know – we reveal a world of answerable questions.

In cases #1 and #3, above, when we put their perceptions of their financial supporters aside, we were able to isolate questions that were interesting to the practitioners themselves. I asked, “What really keeps you up at night about your programs? Or, what, if your programs could accomplish it, would you consider a wild success?” Slowly, we began to identify who the successful “graduate” of their programs might be, what they would know, feel, and be doing in their lives. We looked at the relationship between the practitioners' programs and these outcomes, and then we tried to design research that would answer these questions. We used qualitative research techniques: One organization would call program users at random, consistently asking them the same three questions; another organization would hold living room focus groups, keeping the intimate spirit of their organization intact, leading textured conversations that would both answer the practitioners' questions about their work and offer, possibly, a meaningful reflective experience for the participants. We began to switch the main question we were answering from “Does this program work?” to “How does this program work?” and even, “How can it work more effectively?” – giving practitioners a useful understanding of their program theory that they could not develop elsewhere. Evaluation came to represent not something unnecessary and unanswerable, but a source of information that would help them do what they do better.

As part of these conversations, we did turn toward what their financial supporters might want to know. Through conversations with them and through their own reflection, the practitioners determined that their financial supporters' questions were, in fact, much more nuanced than simply, “Does this work?” And this was particularly important in case #2, above. In this instance, the financial supporters fully appreciated the “ready, fire!” nature of the program's launch. For a variety of reasons, they recognized that they would not receive through evaluation a summative judgment on the worthwhileness of this program. They recognized the limitations of experimental evaluation research in Jewish institutional life. And, decisions to support something financially are, themselves, nuanced and complex, involving a range of factors. They wanted not a final recommendation but instead insight that they could use to make a decision. There was a series of questions this research could answer that would be helpful to them, around the ways that the program influenced current participants, who participated and why, and how and if the program could be strengthened in the future. Evaluation research could shed light on these questions, making the financial supporters' decisions easier, helping practitioners and financial supporters both to understand their program more deeply.

Evaluation offers a way of looking at social interventions: an examination of how these interventions address a potential need in participants' lives, and an exploration of a set of questions about intended outcomes as compared with the actual experience of the participants.

Rather than being overwhelmed by the rigor that evaluation technically requires, we can borrow its strengths. For example:

1. Create a logic model. Even for a program that has been facilitated for some time, documenting expected program outcomes, and the relationship between program strategy and outcomes, can be helpful.

2. Create an organizational theory of change. Similarly, organizations often facilitate interventions – i.e., do everyday business – with an air of immediacy. Once a program or start-up organization is established, take a step back to reflect, documenting intended outcomes and linking them to strategy.

3. Gather quick feedback through feedback forms. After every program, ask participants (through a written survey left on chairs, for example, or through individual surveys sent as soon as the program ends, or using tablets/phones at the event, or through text messages...) a quick series of questions, perhaps no more than three. Identify the most helpful indicator(s) of success or effectiveness, such as: Would you come back? Would you recommend this program to someone else? Did you think about something new tonight? (Make sure you ask questions whose answers will be usable and valuable to you!)

4. Create opportunities for long-time participants to reflect on their experience with your organization. Learn about the nuance of your influence through these conversations; use what you learn to expand your success.

5. Study targeted questions that will allow you to strengthen your work, such as who participates, or how they participate, or when in their lives they participate.

6. Study influence in a limited, specific way. Compare influence on a certain group of participants to your hoped-for outcomes. Reflect on why there might be a gap between the two.

Through evaluation research – or evaluative study, as I often call it – we can learn to ask a set of specific questions about the experience of the participants at this particular moment in time. We can develop a set of reflective practices that involves the systematic exploration of participants' experience, told in participants' own voices rather than filtered only through practitioners. We can learn about a program, helping to focus it and expand or deepen its potential influence, sharpening our strategy, even while we lack a definitive understanding of its potential impact on current and future participants.

True experimental evaluation research may require circumstances that do not reflect the real experience of most Jewish institutional leaders. But evaluative study can still be immensely useful, particularly when we are clear about what we have learned and about what we still wonder, even with the information we have.
