I take a back seat to no one in my admiration of the Nonprofit Finance Fund, so it pains me to dissent from Rebecca Thomas’s thoughtful article, "New Ways to Rate Charities Don’t Help Donors Make Smart Bets," published in the Chronicle of Philanthropy, but dissent I must. The simple answer to her important questions – “So why the rush to rate and rank? Why not provide information and let donors decide?” – is that donors aren’t willing to wade through unprocessed information about nonprofits to make more informed decisions about which charities to support. Given the choice between making steady improvements in sites like Charity Navigator and continuing to leave donors without any meaningful guidance about which nonprofits are most effective, I’ll take better rankings every time.
First, a disclosure: I've been both a consultant to Charity Navigator and a member of its Advisory Panel that's helping CN enhance its rating methodology.
Second, the disconnection between performance and funding is one of the most significant problems facing the nonprofit sector. We urge charities to undertake the difficult and expensive work of measuring and managing their performance, knowing all the while that doing so is likely to have little or no effect on donor support. Why? Because, as Hope Consulting’s research shows, donors don’t look for or use performance information. Why? Because it’s not available in a form that’s useful or convenient to them. Lecturing donors about how they “should” evaluate charities, as so many articles (particularly at this time of year) do, is no way to help donors become more thoughtful philanthropists.
In a recent New York Times column, Nicholas Kristof offered what seemed like sensible advice for holiday charitable giving: “donations could accomplish far more if people thought through their philanthropy, did more research, and made fewer, bigger contributions instead of many small ones that are expensive to handle.” He’s right, of course, but his sound advice is virtually impossible to follow. For most donors, there’s no simple or direct way to find which charities do the most good. As a result, charities that don’t accomplish measurable objectives can still attract funding from uninformed consumers simply by telling engaging anecdotes.
I don’t agree that “it is far from clear that the new systems are any better than the ones they seek to replace.” Clearly, Charity Navigator’s evolving methodology is a major advancement in thoughtful analysis. Another pioneering site, Philanthropedia, is presenting donors with a tremendously helpful tool. And I would argue that sites like these are doing exactly what Ms. Thomas suggests they do: “provide donors with a truly meaningful blend of information about an organization’s leadership, direction, revenue model, capital needs, and program results.”
Critics of new rating sites fail to recognize the importance of providing mass-market information tools. Millions of donors are donating billions of dollars to more than 1.5 million nonprofit organizations with almost no idea of how well the charities are run or what they accomplish. In such a crowded market, highly effective nonprofits are not rewarded for strong performance because, for all practical purposes, donors have no way to find such organizations. The haystack is too big and the needles are too few.
Respectfully, it is no answer to say that donors should “take the time to do a comprehensive analysis of the context, risk, and opportunities facing each nonprofit.” Perhaps they should, but they never have and they never will. Nor can we accept that “a sophisticated consumer will look beyond a simple rating and ask data-driven questions about a broad range of ingredients that lead to success in achieving an organization’s mission.” There are no such consumers. Foundation program officers and the skilled professionals at places like GiveWell, SeaChange and New Profit certainly go to those lengths, but ordinary donors who provide 75% of the total donations in this country do not.
Ms. Thomas makes legitimate points about selection bias and disclosing expert affiliations. But I think she goes too far in claiming that new standards of “cost effectiveness” and “financial sustainability” are “arbitrary, inconsistent and misleading.” They’re not perfect and they’re not as rigorous as full-blown due diligence, but they’re vast improvements over the information donors have now. The added value of the new ratings far outweighs their shortcomings.
Ms. Thomas urges Chronicle readers “to be mindful of the many highly effective nonprofit groups we may overlook in the process.” The fact is that existing information about nonprofits systematically overlooks virtually all highly effective charities. For the first time, the emerging wave of new rating sites is likely to make it much easier for donors to find and fund charities that they think produce the most social impact. If so, performance-focused nonprofits will finally have genuine financial incentives to publish meaningful and reliable information about their actual accomplishments. And donors can decide for themselves whether they wish to continue to support charities that are unable or unwilling to do so.
Thanks for the very thoughtful response. The reasons for the disconnect between funding and performance are myriad and complex. There are no easy ways to connect philanthropy to impact, as is apparent from how many smart people have been thinking about the issue for a very long time. I agree with your premise that money often doesn’t flow to the most highly effective nonprofits and that many donors won’t or can’t do the type of analysis required to ascertain social impact.
Better analytical tools and the aggregation of financial and programmatic data can certainly help donors make more informed giving decisions. But I don’t think the answer is to pick winners and losers based on new measures or ratios that may have an unproven correlation to performance and that often don’t account for business model variation. Likewise, I fear we too often assume that organizations that provide more information are necessarily imparting more evidence of success. More is often just more, not better.
There are many interesting experiments out there. I look forward to seeing whether they ultimately activate higher impact philanthropy.
I have to agree that few donors want to put in the time and effort to evaluate charities themselves. I developed the online tool The Charity Rater to walk donors through the steps of evaluating a charity; unfortunately, few people have been willing to commit the 10 to 20 minutes it takes to go through the process themselves.
A hearty "hear, hear!" to your comments here, Steve -- could not agree more.
I would add that this train is heading down the tracks and there is little point in trying to derail it now -- "vast improvements over the information donors have now" is increasingly all that donors need to hear in order to take these new tools seriously. Arguing that donors should wait until perfectly fair and comprehensive tools are available, with their only alternatives being either to keep flying blind or to become personal experts in all the intricacies of evaluating nonprofit effectiveness -- that's just a nonstarter now in the real world, particularly with the generations of adults younger than us Boomers.
Personally, as a professional lifer in this wonderful sector, I'm enormously glad of that shift in attitude and assumption; I'm only frustrated that it's taken so long to start gaining traction. But whether you love it or hate it is at this point immaterial -- it's here.