I am not a fan of the rating systems on which many charity watchdog groups have built their reputations, which means I’m not a fan of those kinds of watchdog groups. In 2014, the big three (Charity Navigator, BBB Wise Giving Alliance and Guidestar) published their joint letter to the “donors of America” decrying the “overhead myth” of using overhead costs as a measure of nonprofit effectiveness. However, they failed to acknowledge their own role as a driving force behind popularizing overhead ratios as a measure of a nonprofit’s goodness.
Six years later, Charity Navigator announced that it had arrived: it had purchased the firm ImpactMatters, which, according to the news release, allows it to “rate organizations based on impact: the amount of good achieved per dollar spent.”
The last time I checked, that was not the definition of impact. As a long-time practitioner of program impact evaluation, I have been under the impression that impact was measured by seeing how well a program delivered on the outcomes it promised. And while cost-effectiveness in delivering that impact is important, if you haven’t measured the success or failure of delivering on that impact first, you cannot get at cost-effectiveness.
Here’s what Charity Navigator has to say about the “beacon” (its descriptor, not mine) it labels “Impact & Results.” To measure “…how well a nonprofit delivers on its mission… we estimate the actual impact a nonprofit has on the lives of those it serves, and determine whether it is making good use of donor resources to achieve that impact.”
Its website goes on to say that it uses a “thorough search of its public materials” to make that estimate of impact “of a substantial portion of its programs.”
Shall I count the problems with this methodology, a methodology its promoters will tout and that donors who believe watchdog groups do good will take at face value?
One: it is using public materials. I’ve yet to see public materials do a good and thorough job of explaining the methodology an organization uses to assess impact, the data collected and the conclusions reached. Sadly, too many of those nonprofits that are trying to do good impact evaluation don’t know how to use the output of that work well internally, let alone externally.
Two: it is basing its ultimate judgment of whether the whole organization is financially efficient on a “substantial portion of its programs.” In other words, it is going to label the whole based on questionable information about only some of the parts.
Three: one of the beauties of good impact evaluation is that it allows us to dig deeper and look beyond merely knowing whether the program works or doesn’t work. It allows us to see if the program works for this subset of clients or that; to see if one piece of the program is more successful than others; and more.
In other words, it allows us to learn and make adjustments in order to make the program stronger and more successful in the long term. Thus, good evaluation doesn’t reveal a yes or no answer; it reveals shades of grey.
Four: all of this will be an estimate, but the declaration of how good the organization is on this “beacon” will be definitive.
Five: Charity Navigator is going to use this very fuzzy datapoint to determine the organization’s cost efficiency. Not the kind of beacon by which I’d want to steer a ship—nor decide where to give my money.
There are, however, some things that make me think I’m ranting for nothing. First, while Charity Navigator has greatly increased the number of nonprofits it evaluates—going from 9,000 to 160,000—that is still barely a drop in the bucket. Second, the organizations it does evaluate are the larger ones, thereby missing the nonprofits doing the vast majority of the work of the sector. In other words, most nonprofits don’t make the watchdog’s cut. This might be one of the few times where being invisible helps. And third, maybe it just doesn’t matter what the watchdog groups say or think, because most folks don’t pay them any heed.
According to new research just released by Grey Matter Research and Harmon Research, 36 million donors used a watchdog’s information to make a giving decision last year. But that’s just about one-third of all donors. Just under half (48%) of those surveyed were completely unaware of any charity watchdog group, while the remainder were aware of at least one of the eight watchdog groups used in the research. Of those, 21% said they always or usually use information from a watchdog group in making a giving decision. This was markedly the case among younger, non-white, religious, and high-income donors. In fact, millennials and Gen Zers are four times more likely than older donors to rely on information from a watchdog group in deciding where to give, suggesting a potentially growing swathe of influence for watchdog groups as they peddle inappropriate metrics for assessing the true goodness of a nonprofit.
Some watchdog groups were once very successful in misleading the public into thinking that the best way to assess the goodness of a nonprofit is a financially driven metric: the overhead ratio. That thought pattern is sealed into folks’ brains and seems unwilling to leave. It is too late to take it back, and the harm it has caused, and continues to cause, is irreparable. Without a doubt, understanding how a nonprofit uses its money is incredibly important. An arbitrary benchmark, however, applied blindly across every organization—regardless of where an organization is in its lifecycle, what its strategic priorities are, or what unique and timely circumstances it might be facing—is not. Charity Navigator is, once again, putting faulty thinking and inappropriate measures out into the stratosphere for all to absorb.
Will nonprofits allow a watchdog group to do this to them yet again?