Whenever I teach evaluation, I warn the group that I carry pompoms. I self-identify as a program evaluation cheerleader, although early on I struggled with understanding the source of others’ resistance to evaluation. It seems people often equate program—or impact—evaluation with their trepidations about their own performance evaluation.
Two completely different things, right? Even if one of the variables in a program evaluation design is "employee," the point isn't to determine whether employees are doing their jobs. Rather, it is to see whether different employees get different results and, if so, what those with better results are doing differently that all could then emulate.
Two studies related to this topic crossed my computer screen within days of one another. One gave the best explanation I have seen for individual employees' dislike of program evaluation; the other shows the tensions about program evaluation that seem to reside with so many. Together they prompt some interesting thinking.
Julia Morley, a lecturer at the London School of Economics, looked at how employees of social services organizations viewed impact evaluation. Using just three nonprofits—one in New York, two in London—she interviewed 93 staff members. After reading their job descriptions, Morley asked them how having to report the impact of their programs made them feel. Such an interesting question for a process that is about measuring whether or not the needle moved.
What is even more interesting is that Morley discovered people were uncomfortable with evaluation, but not for the reason I noted above. They were uncomfortable because they didn't like how it described their job, their role; it didn't paint the picture of how they see their work, which is helping people. Morley believes the discomfort comes from the stark contrast between the statistical, matter-of-fact storytelling of impact evaluation data and the emotional, empathetic storytelling that employees use. It is the dissonance between how employees view their work and how the meeting (or missing) of goals makes their work appear to others that causes the problem.
Oracle NetSuite asked 353 nonprofit executives about their evaluation practices. These execs represented organizations with missions from across the sector; just over one-third had budgets under $1 million, and in total almost two-thirds had budgets under $5 million. Financial stability was their number one concern, and individual donors were their number one source of income. Yet at least two-thirds of respondents acknowledged the importance of measuring various things without reflecting that importance in their practice.
For example, 91% of respondents believe that measuring program efficiency is important, while only 60% actually measure it. Seventy-two percent say it is important to measure fundraising efficiency, but only half actually do it, despite the fact that the number one concern for everyone was financial stability. If you are concerned about financial stability, don't you think you ought to assess your fundraising efforts? The one area where seeing something as important and following through on doing it line up is assessing finances: 75% say it is important to measure year-over-year revenue growth, and 79% actually do it; 80% think it important to measure year-over-year program expense growth, and 77% do it.
When it came to measuring program outcomes, only 71% said this was important, although 75% said they do it; yet only 20% said they are very effective at showing outcomes. Further, 69% of these executives said program evaluation rewards large and well-resourced nonprofits, and 60% said it "minimized the complexity of social issues."
I'd have to disagree with both of these statements. First, good evaluations are designed to fit the capacity of a specific organization: its finances, its people, its time. Great evaluations can be big and complex, or very simple and easy to implement, analyze and learn from. Evaluation is not just for the well-situated. As for minimizing the complexity of issues, evaluation does the exact opposite: it takes a complex issue apart and breaks it into its pieces, as so often it is only the pieces that can be evaluated, while the evaluation of the whole becomes, to a great extent, the sum of the evaluations of the pieces.
By stepping back from the end goal and asking what has to happen first, then second, then third to get there, the complexity is broken down, and both the problem and its solutions become easier to understand.
There is a clear lack of enthusiasm for impact evaluation at both the organizational (leadership) and employee levels. One of the first things I do when I bring out my evaluation pompoms is talk about why evaluation is important: what it can do for an organization when done well and right; how that information can be used within the organization for learning, growth and future planning, and outside the organization for media and donor relations, community building and more. Evaluation information can tell powerful stories that reveal the compassion and caring of employees while also demonstrating how the needle moves. These are not at odds; rather, each strengthens the other. Until leadership understands this reality of evaluation, employees will continue to feel uncomfortable about this all-important tool.