This is a summary. For a full explanation of each point and associated sources, click the relevant heading or subheading to be taken to the body of the report.
With so many charities out there (1.5 million in the US alone!), and countless appeals for support coming our way every day, the world of philanthropy can be difficult to navigate. In this report we’ll explain why we think getting this navigation right is a crucial part of donating to charity, and outline our methodology for finding impactful, effective and transparent charitable opportunities.
In its simplest form, our take on charity can be summarised by three core ideas:
Make a real difference: When thinking about charity evaluation, what often comes to mind is big financial scandals. In fact, the UK Charity Commission's research shows that irresponsible spending and fundraising practices top the list of the public's concerns about charity. In reality, though, there are strong safeguards against fraud in the sector, and the threat is exaggerated by the high profile of these relatively few incidents. Something more difficult, and arguably more important, is figuring out which charities are outstandingly good.
Doing this is a game-changer: evidence shows that some charities have tens, or even hundreds, of times more positive impact than others for the same amount of funding. In practice, this means that choosing an effective charity is like multiplying a donation to a less effective charity many times over. Why such large differences? Studies show that implementing effective social programs is harder than most people think, which is why many charities have very little impact, or none at all - despite the best intentions. And some even do harm, through unintended consequences. Founders Pledge are here to find the very best charities out there, so that you can feel confident that your donation is doing as much good as possible.
Follow the data: You’re probably used to making business decisions by digging into the data. We do the same with charities. We look for two main pieces of information. First, we look for evidence that programs improve outcomes, not just outputs. For example, we don’t just want to know how many books have been distributed; we want to know how much more students are learning. Second, we look at impact evaluations: studies that track the causal impact of charities’ work. For example, we don’t just want to know whether students are learning more; we want to know whether this increase was brought about by the charity’s work.
Change the game: Charities spend a significant portion of their budgets competing for future funding. This means that whichever qualities donors ask for, charities will compete on. If donors make their choices based on glossy marketing brochures, charities will have to spend a significant portion of their budgets on those. But if donors ask for impact evaluations and evidence of effectiveness, charities will allocate their resources there. By giving smart, the Founders Pledge community can have outsized impact: helping to encourage industry-wide best practice, and influencing how charities operate now and in the future.
Our framework for finding amazing donation opportunities is based on best practice in impact assessment, drawing on the latest academic standards in the social sciences. We evaluate charities on three levels:
1. Focus area
There are countless important and urgent causes in the world, often making us feel helpless when it comes to choosing one over another. This is why, if you don’t already have one particular focus area you’ve decided on, we want to help you identify the area where your donation is most likely to make a big difference. To do this, we start by understanding your core values: for example, how much weight do you give to saving a life, as opposed to improving quality of life? How do you weigh the suffering of animals against the suffering of humans?
The second step consists of selecting the area where your donation is most likely to have a big impact, given your individual values. To do this, we look at three factors:
Scale: how many people are affected by the problem? How badly does the problem affect them?
Tractability: relative to how large the problem is, how easy is it to improve it? Can we realistically make meaningful progress at this time?
Neglectedness: how much attention is given to the problem, and how many resources are currently being spent to solve it? In other words, is it a ‘crowded’ area? By the law of diminishing returns, your donation is likely to have more impact on a challenge where fewer people are already helping, relative to the scale of that challenge.
2. Intervention
In every cause area there are many different proposed strategies to help. For instance, if you are interested in improving education, you could consider supporting a program to provide learning materials, train teachers, or give scholarships. In most cases, some strategies are much more successful than others. To find the ones that are, we look for the following qualities:
Supported by robust evidence: this means we look for interventions that have been studied, and we look for those studies to be methodologically sound and reliable. For instance: are we confident that any impact measured has actually been caused by the specific intervention, and not by a separate, unknown factor? Do the studies avoid possible biases? Do they show consistent effects?
Effective: does the data show that the intervention has positive outcomes, and achieves its goals?
Cost-effective: how large is the impact per dollar spent? In other words, if you donate a set amount of money, how much good does it do compared to spending the same amount on another strategy? Quantifying impact is much easier for certain types of interventions (such as vaccines) than for others (such as policy advocacy campaigns). However, in most cases, estimates can be made using a variety of methods.
3. Charity
There are often several organisations running a given type of intervention. Our goal is to find the ones that do so most effectively. The specific questions we discuss with a charity depend on the area they’re working in, but the most essential things we consider are:
- Intervention implemented: does the charity implement a promising intervention?
- Organisational strength: does the charity have a strong internal structure?
- Room for funding: do they have concrete plans for growth? Would they be able to use further funding productively?
- Transparency: are they transparent about their activities and finances? Have they shown willingness to adapt or change their methods if a project did not work as hoped?
- Track record: if they have been running for a while, have they had any success?
Evidence and risk
Some donors are most comfortable supporting organisations where we can confidently and accurately estimate how much good their donation will do. Others prefer to support interventions that are considered high-risk and high-return. These are usually interventions where the chance of success is lower, but where a successful outcome would have outsized impact. Some donors want to make a mix of low- and high-risk donations, similar to how one might think about a ‘diversified investment portfolio’.
We are committed to respecting individual values, and think that an evidence-based approach can be taken no matter where your preferences lie, and how you feel about risk taking.
- How we think about charity
- Make the biggest difference
- Follow the data
- Change the game
- Research methodology
With so many charities out there, it’s difficult to know how to make the right call. But understanding their differences will likely enable you to do a lot more good with your philanthropy. Our take on charity is based on three key ideas: make the biggest difference, follow the data, change the game.
When thinking about charity evaluation, what often comes to mind is financial scandals and fraud. In fact, the UK Charity Commission's research shows that irresponsible spending and fundraising practices top the list of the public's concerns about charity. In reality, though, there are strong safeguards against fraud in the sector, and the threat is exaggerated by the high profile of these relatively few incidents. Something more difficult, and arguably more important, is figuring out which charities are outstandingly good.
Let’s say you only had four charities to choose from. Charity A could help 200 people with your donation, Charity B could help 100, Charity C would not help anyone, and Charity D would harm 10 people. Where would you donate?
It’s a no-brainer: you’d pick Charity A. You’d never want to give to Charity D, and donating to Charity C would be a waste. Donating to Charity B would be good, but if you choose Charity A the same money would do twice as much good - it’d be like doubling your donation to Charity B.
In the real world, things get much more complex than this. But the more research is done on charity impact, the more we see that the general principle holds: some charities are vastly more effective than others. And the fact that these questions are complex does not mean we can afford to ignore them. By applying critical thinking and evidence to social problems, we can have an outsized impact on the world’s biggest challenges.
Before we explore what makes a charity really great in our eyes, there are three facts about the charity world we need to take on board:
Imagine you could provide communities with clean water, give kids a new game to play, and free up time for women - all at a stroke. Sounds like a brilliant solution. The PlayPump promised to do exactly that. The PlayPump was a modified merry-go-round designed to pump underground water: as children played on the merry-go-round, water would be pumped up from below, so women would no longer have to walk miles to use a hand- or windmill-powered pump. The PlayPump received millions in funding, from donors including the US government, the Co-op, and Jay-Z.
It soon became apparent that PlayPumps were not such a good idea: the merry-go-round never spun freely, and in order to pump water out of wells, children had to apply constant force. So the merry-go-round was not fun; it was constant hard work. It could also be dangerous, leaving children sick or injured. Women had to take over the work from the children, and many found it worse than the previous alternatives: both demeaning and exhausting. The lesson of this story is that even if an idea makes intuitive sense, has the best intentions behind it, or has been endorsed by major philanthropic players, we cannot automatically assume it is going to be a successful solution. We need evidence.
The Coalition for Evidence-Based Policy is an NGO that aims to increase government effectiveness through the use of rigorous evidence about what works. In a 2013 report, they found that 90% of the interventions evaluated in studies commissioned by the Institute of Education Sciences since 2002 had weak or no positive effects. They also reported that the Department of Labor had evaluated 13 interventions since 1992, and found that about 75% had weak or no positive effects. In 2010, the American think-tank the Brookings Institution reported that 10 federal social programs had been evaluated by randomised control trials, which are often considered the “gold standard” of impact evaluations (more on this below). Nine of these evaluations found weak or no positive effects.
Working out the precise numbers is very challenging. However, what we do know alerts us to the fact that uncovering fraud is not our biggest problem when it comes to the charity sector: finding out what works and what doesn’t is.
What’s more, without concrete evidence it’s extremely difficult to predict, based on intuition or personal experience, which programs will be successful and which will not. To highlight this, the nonprofit organisation 80,000 Hours ran an experiment to test our ability to predict program effectiveness: they collected a sample of 10 well-researched programs, and asked more than 100 participants to guess which ones had a positive effect, which had no effect, and which were harmful. On average, participants got 4 out of 10 right, which is only slightly better than picking at random. You can try the experiment here.
To summarise: figuring out how to help people in a substantial way is much more difficult than it seems, and you can’t rely on intuition to tell you what works and what doesn’t. But as we’ll demonstrate below, it is absolutely possible. Imagine if all the funds spent on the programs mentioned above - those with little or negative effect - had been spent on projects with verifiable impact. There would be a lot less suffering in the world.
Knowing whether or not a program works at all is important, but it’s only half of the story: we also want to know how well it works. To do this, we estimate its cost-effectiveness.
When it comes to charities, cost-effectiveness essentially means ‘how much good is achieved per dollar spent?’. ‘Good’ can be measured by various metrics depending on the intervention: for health interventions it can be lives saved or years of healthy life gained; for education interventions it can be improvements in learning; for climate change interventions it can be CO2 emissions averted, and so on.
Some of these metrics are more straightforward to measure than others. For example, for many years school attendance was the standard metric for evaluating interventions in education. However, research has shown that focusing on learning outcomes, such as student results on standardized maths tests, better indicates how much an intervention will ultimately help students in their everyday lives and futures.
Let’s look at an example of the variation in cost-effectiveness in health interventions: the Disease Control Priorities Project (DCPP) is a project which aims to determine priorities in health around the world, focusing on low-income countries. The graph below shows their estimates for the cost-effectiveness of different programs related to HIV: treating Kaposi’s sarcoma (a type of cancer which mostly affects people with HIV), providing antiretroviral therapy, distributing and promoting use of condoms, and peer education for high-risk groups. They use a measure called DALYs (disability-adjusted life years): a DALY can be thought of as one lost year of healthy life, and it’s used to measure the “burden” of each disease.1
As you can see above, the differences are very large: spending $1,000 on treating Kaposi’s sarcoma averts roughly 0.02 DALYs, while the same amount spent on peer education can avert roughly 27 DALYs. The latter intervention achieves roughly 1,400 times more years of healthy life than the former, given the same size donation.
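The arithmetic behind this comparison is simple division. A minimal sketch, using the approximate figures quoted above (the exact DCPP estimates differ slightly):

```python
# Approximate DALYs averted per $1,000 spent, as quoted above
# (rounded illustrative figures, not exact DCPP values).
dalys_per_1000 = {
    "treating Kaposi's sarcoma": 0.02,
    "peer education for high-risk groups": 27.0,
}

# Equivalent view: cost per DALY averted for each intervention.
cost_per_daly = {name: 1000 / dalys for name, dalys in dalys_per_1000.items()}

# Ratio between the most and least cost-effective option.
ratio = max(dalys_per_1000.values()) / min(dalys_per_1000.values())
print(ratio)  # roughly a 1,350-fold difference with these rounded figures
```

With these rounded inputs, treating Kaposi's sarcoma costs about $50,000 per DALY averted, while peer education costs about $37 - which is why the ranking, not just the absolute numbers, is what matters when choosing between interventions.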
Choosing an outstanding charity makes a big difference to how much good is achieved, but the examples above show it’s not a straightforward task. So how do we go about it? The same way you make important decisions and face complex problems in your business: we dig into the data.
Below, we explain the three basic principles of what we look for in charities. For a detailed explanation of our methodology, see further down.
When assessing data on charities, one of the most important distinctions to keep in mind is the one between outcomes and outputs. Outcomes measure changes in what we ultimately care about. They include metrics like those mentioned above: lives saved, DALYs averted, improvements in learning, increases in consumption, etc. Outputs measure the things we use to try and achieve those outcomes. They can be things like: number of vaccines distributed, number of clinics or schools opened, number of books or tablets distributed, number of chickens gifted, or amounts of cash received by recipients.
Looking at outputs is important, because it tells us what charities are concretely doing. But, ultimately, what we care about is outcomes: whether (for example) charities are making people healthier, more educated and less hungry. And, as we’ve seen above, the step from one to the other can’t be taken for granted. That’s why it’s essential to track if, and by how much, outputs are leading to positive outcomes.
Imagine an organisation is working to increase literacy rates. Before their program, 20% of children could read. After the program, the number went down to 15% (see graph below). Our natural instinct would probably be to dismiss their work: after all, things have gone from bad to worse.
However, it would be premature to draw such a conclusion. Imagine now that a major natural disaster had hit the region. In the area where the charity didn’t operate, literacy rates decreased to 5%. In the area where the charity did operate, it decreased to 15%. The two areas were otherwise similar in all relevant respects, so the charity really improved literacy rates by 10 percentage points, which is a very good result.
And just as we might fail to recognise successes, it can be hard to spot failures. For example, we might find that, after a charity has implemented its program, things look a lot better. But here too, the improvement could have been brought about by something else - for example, economic growth, or a separate government intervention. Imagine a charity working to increase income. Before the intervention, average household income is $1000, and after the intervention it’s $1500. You might think this is enough to say the charity has done a great job. But once again, it would be premature. Imagine the region has enjoyed very rapid economic growth, thanks to the recent development of more reliable transport infrastructure. In the areas where the charity did not work, income also reached $1500 per household. So the charity did not actually improve household income; it happened to be working with households whose income was increasing for independent reasons.
The moral is that in order to track the actual impact of programs, we need to look at the counterfactual: that is, we need to work out what the outcome would have been if the intervention had not taken place. The studies that look for this data are called impact evaluations.
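The counterfactual logic in both examples above reduces to a subtraction, often called a difference-in-differences: the change in the area where the charity worked, minus the change in a similar area where it didn’t. A hypothetical sketch using the literacy numbers from the example:

```python
def difference_in_differences(treated_before, treated_after,
                              comparison_before, comparison_after):
    """Change in the treated area minus change in a similar untreated area.

    The comparison area stands in for the counterfactual: what would
    have happened if the intervention had not taken place.
    """
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Literacy example: rates fell everywhere after the disaster,
# but fell far less where the charity operated.
impact = difference_in_differences(
    treated_before=20, treated_after=15,       # charity area: 20% -> 15%
    comparison_before=20, comparison_after=5,  # similar area: 20% -> 5%
)
print(impact)  # 10 percentage points: the charity's actual effect
```

The same subtraction exposes the income example as a false positive: both areas went from $1000 to $1500, so the estimated impact is zero. (This is only valid if the comparison area really is similar in all relevant respects, which is the hard part in practice.)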
Some impact evaluations use experimental or quasi-experimental methods. This means that they compare the group that has received the intervention (the treatment group) with a similar group that has not (the control group). Randomised control trials (RCTs) are often considered the ‘gold standard’ of impact evaluations.2 Participants are randomly allocated to either the treatment or the control group. The intervention is then provided to the treatment group, and outcomes are measured in both groups. The difference in outcomes between the two groups tells us the effect of the intervention: since participants are allocated at random, there are no systematic differences between the groups, and any discrepancy is most likely brought about by the intervention itself.3
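In its simplest form, analysing an RCT is just comparing group means. The toy simulation below (all numbers invented) shows why random allocation works: the groups start out statistically alike, so the difference in average outcomes recovers the intervention’s true effect up to sampling noise:

```python
import random

random.seed(0)

# 200 hypothetical participants, each with a noisy baseline test score.
baselines = [50 + random.gauss(0, 10) for _ in range(200)]

# Random allocation: shuffle, then split into treatment and control.
random.shuffle(baselines)
treatment_baselines, control_baselines = baselines[:100], baselines[100:]

# Made-up effect: the intervention adds 5 points to a treated score.
TRUE_EFFECT = 5
treatment_scores = [b + TRUE_EFFECT for b in treatment_baselines]
control_scores = control_baselines

# Estimated effect = difference in mean outcomes between the groups.
effect = (sum(treatment_scores) / 100) - (sum(control_scores) / 100)
print(round(effect, 1))  # recovers TRUE_EFFECT up to sampling noise
```

Because allocation is random, neither group is systematically healthier, wealthier, or more motivated; with larger samples, the estimate converges on the true effect.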
Going back to the example above, a randomised control trial could help us identify the impact of the educational charity operating during a natural disaster (see graph below):
It’s not always possible to run randomised control trials, or quasi-experimental impact evaluations. This is often because it would be too difficult to pick ‘treatment’ and ‘control’ groups for those programs. For instance, it is often hard to test the impact of advocacy, or research, because we cannot control those activities in ways that allow us to randomise, nor do we have enough information to run quasi-experimental studies. In these cases, there are other methods for estimating the causal effect of charities’ work. One way to do so is to use process tracing - a qualitative research method that works to identify the causal process between an intervention and an outcome. In this case, we identify various plausible causal explanations for the observed outcomes, and verify which one is best supported by evidence. If the causal chain including the charity’s work is the most plausible explanation for the change, we can be more confident in the organisation’s impact.
Gathering evidence of effectiveness is a win-win situation: charities get a chance to demonstrate their impact (or, if the results are negative, understand what they’re doing wrong and improve their model), and donors are more empowered to do good. So why is it that many organisations still don’t do it?
There are two main reasons. On one hand, charities often don’t have the time, resources, or incentives to do so. Conducting research and impact evaluation requires a lot of resources. On the other, donors are often unaware of differences in effectiveness, lack time and resources to dig deeper, and end up making decisions on the basis of what they hear from the charities. This means they’re not creating the incentives needed for the charities to justify spending money on research. There’s also an aspect of industry standard at play here: rigorous and evidence-driven philanthropy is a relatively recent concept in international development, and not too long ago it was rarely spoken about. There is now a growing movement of highly effective and data-driven charities at work, but impact evaluation is still not considered the industry benchmark or best practice. In order for that to change, donors will have to be smart customers.
This points to an opportunity. Whichever qualities donors ask for, charities will compete on. If donors choose where to donate on the basis of data, charities will have a very good reason to start following data themselves. Creating the right incentives won’t just impact the charities which do get funding, it’ll change the behaviour of any charity competing for funding.
A wide knowledge base around social impact is ultimately good for everyone, because unlike in the private sector, demonstrating ‘proof of concept’ (evidence that something works) in the social sector doesn’t just benefit the instigator of the research; it benefits the whole ecosystem of organisations trying to do good. Existing charities can use that expanding evidence base to improve their own programmes, new organisations can have a better idea of which strategies have succeeded and failed in the past, and donors can feel more empowered to make good choices.
Our framework for finding amazing donation opportunities is based on best practice in impact assessment, drawing on the latest academic standards in the social sciences. We evaluate charities on three levels:
Focus area: when deciding where to donate, it can be helpful to distinguish different broad areas of focus. For instance, one could concentrate on malaria, micronutrient deficiencies, animal farming, climate change, and so on.
Intervention: in each cause area, we can look at different interventions - that is, different strategies to tackle the problem. For instance, if you are interested in improving education, you could propose to provide learning materials, train teachers, give scholarships etc.
Charity: for some types of interventions, there are several charities implementing them. For example, several charities work on deworming or provide scholarships. We want to find the organisation that implements the strategy most successfully.
Our framework is, so to speak, ‘top-down’: we start by looking at a focus area, then identify the most promising interventions within it, and then move on to find the best charities implementing those interventions.
There are countless important and urgent charitable causes in the world, often making us feel helpless when it comes to having to choose one over another. If you’ve already decided on which problem you want to focus on, we can go straight ahead with evaluating interventions. However, if you don’t have one particular focus area you’ve decided on, we can help identify the area where your donation is most likely to make a big difference. To do this, we start by understanding your core values: for example, how much weight do you give to saving a life, as opposed to improving quality of life? How do you weigh the suffering of animals against the suffering of humans?
The second step consists of selecting the area it makes most sense to focus on, given your individual values. Overall, we aim to find the area where additional resources would bring about the most impact.4 However, it is often difficult to find direct information about such broad questions, which is why it’s easier to break the process down into steps. To do so, we look at three factors: importance, tractability, neglectedness.
How many people are affected by the related problem? How badly does the problem affect them?
These questions are both difficult to ask and to answer, and we appreciate that many subjective value judgements go into how important you think a problem is. However, these questions are also crucial to deal with, as they have big implications for the impact of a donation.
Different problems affect different numbers of people, and affect them to different degrees. By focusing on areas that affect large numbers of people, and can significantly improve their lives, you can find interventions that make a huge difference.
How easy is it to improve or solve the problem? How expensive is it?
Some problems are very difficult to solve, others much easier. For instance, figuring out nuclear fusion would solve a lot of the world’s energy challenges, but it’s a very difficult problem to tackle. In other cases, however, solutions are already known. For example, we know that unconditional cash transfers increase household consumption, assets, business investment, and revenue. Based on this research, charities such as GiveDirectly transfer cash to households without strings attached.
How much attention is given to a problem and how many resources are employed to solve it?
The more people are trying to solve a problem (the more an area is ‘crowded’), the more likely it is that the most promising solutions have already been explored. In these cases, further resources are more likely to hit diminishing returns, meaning that further donations would do more good elsewhere. Conversely, if there is still ample space to absorb resources in an area, then your contribution is more likely to have a positive impact. For example, the WHO estimates that 43% of people at risk of malaria in sub-Saharan Africa had no protection, and that in 2015 malaria funding was US$2.9 billion, only 45% of the funding milestone set for 2020.
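One common way to combine the three factors is to rate each on a shared scale and multiply, so that a cause only scores highly if it does well on all three. The sketch below is purely illustrative (the cause names and ratings are invented, and this is not Founders Pledge's actual scoring model):

```python
# Hypothetical 0-10 ratings for two made-up cause areas.
causes = {
    "cause A": {"importance": 8, "tractability": 6, "neglectedness": 7},
    "cause B": {"importance": 9, "tractability": 7, "neglectedness": 1},
}

def priority_score(ratings):
    # Multiplying (rather than adding) means a very 'crowded' area
    # scores low even if the problem is large and tractable.
    return (ratings["importance"]
            * ratings["tractability"]
            * ratings["neglectedness"])

scores = {name: priority_score(r) for name, r in causes.items()}
print(scores)  # {'cause A': 336, 'cause B': 63}
```

Here cause B describes a bigger, more tractable problem, but its crowdedness drags its score down - capturing the diminishing-returns intuition above in a single number.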
Within most focus areas, there are several strategies proposed to solve the problem, some of which are likely to be more successful than others.
To begin with, we map the different possible interventions. As an example, let’s go back to malaria. The first thing we would do when looking at this challenge is compile a list of potential interventions:
- indoor residual spraying - spraying insecticide on surfaces where mosquitoes rest
- distributing bed nets - giving people nets to sleep under, to decrease the chances they’ll be bitten and infected
- intermittent preventive therapy - treating and preventing malaria in the groups for which it’s most dangerous: children and pregnant women
- diagnosis and treatment
Once we have a good idea of which interventions can be employed to address the problem, we look for effective interventions: we go through the academic literature on impact evaluations, searching for interventions that have an effect on outcomes.
We then assess how robust the evidence of impact is. We do this by carefully evaluating the literature against several criteria: for instance, when assessing testable interventions, we ask questions like: how reliably can one infer causal impact from the methodology employed in the studies? Do the studies avoid possible biases? Do the studies show consistent effects?
Finally, we look for the most cost-effective interventions: the programs that lead to the biggest improvements per dollar spent, and ultimately do the most good, helping as many as possible.
One of the most important criteria in evaluating a charity is whether it is implementing interventions that are effective, cost-effective and supported by evidence. Sometimes charities run impact evaluations of their own work, in which case we can assess their impact directly. Most of the time, however, charities do not have impact evaluations (see above). When this is the case, we start from the academic literature on similar programs, carefully compare it with the program run by the charity, and then use this comparison to estimate how effective the charity’s program is.
We also look at how well the organisation is structured and works internally. In particular, we ask the following questions:
- Strength of the team: how qualified and experienced are their staff members, leaders and board? Are they under- or over-staffed?
- Strategic direction: how clear are their goals? Do they have solid understanding of risks and ways to mitigate them?
- Financial soundness: how robust are their financial records and plans? How effectively are they able to fundraise?
- Monitoring and evaluation data: do they collect comprehensive and reliable data on their activities and outputs?
- Data and learning: do they employ data to make their strategic and operational decisions? Are they lean and flexible? Do they learn from their mistakes?
- Room for funding: would they be able to use further funds productively? Do they have concrete plans for growth?
- Transparency: are they transparent about their activities, and any mistakes they have made?
- Track record: if they have been running for a while, have they had any success? If not, have they demonstrated willingness and ability to adapt?
As mentioned previously, personal values come into play when choosing how to donate. One aspect of this is appetite for risk.
For instance, you may be most comfortable supporting organisations which lead to moderate, but certain, benefit. On the other hand, you may prefer to support interventions that are considered high-risk and high-return. Examples of intervention types that would be classified as such are:
- Advocacy: advocacy organisations are generally considered high risk to donate to, because social norms and policies are very difficult to influence. However, ‘wins’ in this area can be huge, as their effects reach societal institutions: a positive outcome may have longer-term impact on a very wide set of people.
- Research: research that answers decision-relevant questions can direct resources towards more effective interventions and avoid wasting resources on programs that don’t work. A successful research project has a multiplier effect, as its findings can be used to improve the effectiveness of both present and future funders. However, because of the very nature of research, it’s hard to tell which studies will make relevant and impactful discoveries, and what their effects will be - which is why they are considered high risk.
- Seed funding: when we evaluate charities, one of the criteria we consider is track record - that is, whether charities have been successful at what they do. This is helpful, because it gives an indication of whether their methods and strategies work. But when considering charities that are just getting started, the risk of failure is larger, since there is less evidence in support of the organisation’s effectiveness. However, as initial funding is often hard for charities to find, seed funding can have a very large counterfactual impact if it helps get effective charities off the ground.
Some funders prefer to create a ‘diversified donation portfolio’, combining both low- and high-risk donations.
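One way to think about the low- versus high-risk trade-off is expected value: the benefit if an option succeeds, weighted by its chance of success. A hypothetical sketch (both options and all numbers are invented for illustration):

```python
# Each hypothetical option: chance of success, and people helped if it succeeds.
options = {
    "direct delivery (low risk)":   {"p_success": 0.95, "people_helped": 100},
    "advocacy campaign (high risk)": {"p_success": 0.05, "people_helped": 5000},
}

def expected_people_helped(option):
    # Expected value = probability of success x benefit on success.
    return option["p_success"] * option["people_helped"]

for name, option in options.items():
    print(name, expected_people_helped(option))
# The advocacy campaign has the higher expected value (250 vs 95 people),
# but also a 95% chance of helping no one at all.
```

This is why some funders mix both kinds of donation: the high-risk option maximises expected impact, while the low-risk option guarantees that some good is done either way - the same logic as a diversified investment portfolio.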
For more details about how we evaluate charities and think about donating and social impact, do get in touch with us at email@example.com
1. Quantifying health is complicated. While there is debate around the appropriate use and calculation of DALYs, it is the most widely accepted and used metric amongst the most important organisations in the field, such as the World Health Organisation. ↩