Effective altruism (EA) is about trying to reason carefully about how to do the most good. On the practical side, EA has inspired the donation of hundreds of millions of dollars to impactful charities, and led to many new organisations focused on important causes. On the theoretical side, it has led to rigorous and precise thought on ethics and how to apply it in the real world.
Regardless of your personal convictions on charity, ethics, or moral duty, the intellectual work that has come out of EA is valuable, especially in two ways.
First, much EA work is exceptional in the breadth and weight of the matters it considers. It is interdisciplinary, including everything from abstruse meta-ethics to the nitty-gritty of interpreting studies on the effectiveness of vaccination programs in developing countries. Because of its motivation – finding and exploring the most important problems – it zeros in on the weightiest issues in any particular area. EA work is a goldmine of interesting writing, particularly if you find yourself drawn in a discipline-agnostic way to all the biggest questions.
Second, EA work is often notable for a scientific precision of argument that is usually missing from discussions of abstract things (e.g. meta-ethics) or emotionally charged issues (e.g. saving lives).
This post explains the motivations behind EA, and has a table of contents for this post series.
Altruism, impartial welfarist good, and cause neutrality
I will have more to say in a later post about specific philosophical issues in defining what is moral. For now I will hope that the idea of an impartial welfare-oriented definition of good is sufficiently defensible that I will not be mauled to death by moral philosophers before that post (though if it doesn’t happen by then, it will certainly happen afterwards).
Impartial (in the sense of treating every person in a fair way, and not changing depending on who’s doing the judging) and welfare-oriented (in the sense of valuing happiness, meaning, fulfilment of preferences, and the absence of suffering) good is an intuitive and fairly unobjectionable idea. Yet careful consideration of what it implies points towards a different idea of charity than the current norm.
Most charities are single-issue charities. This generally makes sense: better to have one organisation be really good at distributing malaria nets and one really good at advocating for taking nuclear weapons off high alert, than to have one organisation doing a mediocre job at both (malaria net delivery via ICBM?). But the siloing of causes often goes further. If intervention effectiveness is considered, it is often after choosing a cause area. To weigh cause areas against each other, to judge the needs of African children against, say, factory farmed pigs, seems like a faux pas at best, and a sin at worst (for a particularly incendiary tirade on the topic, see this article).
However, if we hold ourselves to an impartial welfarist idea of good, this judgement must be made. An artist might choose what to paint based on how they want to express themselves or on a sudden flash of inspiration. A would-be altruist refusing to weigh causes against each other and instead selecting them on the basis of passion or inspiration is acting like our artist. In the artist’s case it doesn’t matter, but the altruist, in doing so, implicitly values their own choice and/or self-expression over the good that their actions might do. This is not altruism by our definition of good.
Of course, people differ in their knowledge and talents, and these tend to align with inspiration. In the real world, it may well be that your greater ability, drive, and/or knowledge in one area outweighs the greater efficiency at which results convert to goodness in some other area. We will also see arguments for not placing all our bets on the same cause, and explore the enormous uncertainties that come in trying to compare causes. But the idea of cause-neutrality – that causes are comparable, and that making these comparisons is an important part of the job of any would-be altruist – remains.
Effectiveness
Focusing on the idea of impartial welfarist good also makes it clear that, in trying to do good, we should focus on the good our actions result in. This may seem like an obvious statement, but it is not true of much charitable work.
For example, we tend to emphasise the sacrifices of the donor over the benefits of the recipients. Consider old tales of people like Francis of Assisi. Their claim to virtue (and sainthood) comes from giving away all their possessions, but the question of how much good this did to the beggars doesn’t come up. This attitude continues in the many modern charity evaluators that focus on metrics like percentage of money spent on overhead costs. Paying big salaries to recruit the best management and administration may genuinely be a cost-effective way of increasing the total good done, but it conflicts with our stereotype of self-sacrificing do-gooders. Of course, there is virtue in selfless sacrifice, but we should remember that the goal of charity is to make recipients better off, not to rank donors.
As with much human behaviour, charity often isn’t driven by rational thinking. Some consider this a good thing: altruistic acts should come from hearts, not spreadsheets. This is wrong – if you care about impartial welfarist good.
It is a fact about our world that good charity is hard, and that charities have vast differences in cost-effectiveness. When one charity results in ten or a hundred times more healthy years of life per dollar spent than another, boring details of statistical effectiveness become important moral facts. (This is true not just of charities, but most kinds of projects that might impact many people – government policy, activism, and so on.)
When the difference in effectiveness between interventions is often greater than the difference between doing something and doing nothing at all, and when these differences are often measured in lives, effectiveness considerations are critical in any attempt to do good.
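The arithmetic behind this point can be made concrete with a toy comparison. All figures below are invented for illustration, not real charity data:

```python
# Hypothetical cost-effectiveness comparison: healthy life-years
# gained per dollar donated. The costs are illustrative placeholders.
def life_years_gained(donation, cost_per_life_year):
    """Healthy life-years bought by a donation at a given cost."""
    return donation / cost_per_life_year

# Suppose charity A buys one healthy life-year per $50 and
# charity B one per $5,000 -- a 100x effectiveness gap of the
# kind described above.
donation = 10_000
years_a = life_years_gained(donation, 50)     # 200 life-years
years_b = life_years_gained(donation, 5_000)  # 2 life-years

assert years_a / years_b == 100
```

With gaps this large, the choice of *which* charity to fund matters far more than modest changes in *how much* is given, which is why the statistical details become moral facts.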
There is a role for simple, comforting altruism, but this role isn’t making big decisions over how to benefit others. These decisions deserve more than goodwill. They deserve analysis, they deserve work, and they deserve to be made right.
Opportunity
Debates over charitable giving often centre on questions of moral duty and obligation (a good example is Famine, Affluence, and Morality, Peter Singer’s classic paper that laid some of the foundations of what later became EA).
Another framing is to think of it as an opportunity. To someone who cares about impartial welfarist good, altruistic acts are not a burden but an opportunity to achieve valuable things. In particular, there are many reasons to think that we (as in developed-world humans of the early 21st century) have an exceptionally large opportunity to do good.
First, our values are better than those of people in preceding eras. This statement implies many philosophically contentious points, but for the time being I will not defend them, instead appealing to what I hope is a common-sense conviction: human morality is not so relative that it becomes impossible to differentiate modern secular humanist values from values that support war, slavery, and boundaries on personhood that exclude most people.
(Of course, this statement also suggests that our current moral views are far from perfect too. This is important, very likely true, and will be discussed at length in future posts. The fact that this is increasingly recognised is hopefully a hint that we are at least on the right track.)
Second, we have more resources than people in previous eras. There is also large variation in global income, meaning greater opportunities for those in rich countries to help others for cheap. A 2-adult, 1-child UK household with a total income of £30,000 is in the top 10% of the world income distribution and 7 times richer than the median global household.
Third, though the peak of global income inequality has already passed as developing countries begin to catch up, knowledge of what is effective has increased, and technology makes it easier to apply this knowledge. Today GiveWell’s thorough charity research can multiply the impact of giving. Twenty years ago, there was no GiveWell. Two hundred years ago, donation guidance, if it existed, might have consisted of the church telling you to donate to them so they could convert people and push their social values.
Fourth, we may have an unprecedented ability to affect where civilisation is headed. The pace of technological advancement increases the variance of possible future outcomes: in the next few decades we might nuke each other or engineer a pandemic – or we could set ourselves on a trajectory towards becoming a sustainable civilisation with billions of happy inhabitants that lasts until the stars burn down. Past eras didn’t have similar power, and if the future goes well humanity will no longer be as vulnerable to catastrophe as we are today, so people living roughly today might have exceptional leverage.
Common EA cause areas
The cause areas most frequently seen as important, and most specific to EA relative to what other charities focus on, are:
- Global poverty, because the developing world is big, poor, and has many tractable problems with well-researched solutions.
- Animal welfare, because it is largely ignored, and potentially huge in scope (depending on how highly animal lives are valued).
- Long-termism; focusing on the distant future (and particularly avoiding human extinction) because of the overwhelming amount of good that may come to be if the future goes well.
These are far from the only cause areas discussed in EA. Many EA-affiliated people argue either against some of the above, for the overwhelming importance of one relative to the other, or for entirely different causes.
Effective altruism in practice
In practice, EA can seem weird and theoretical.
The main reason for EA weirdness is that it casts a wide net. Everyone agrees that international peacekeeping is an important project, and also a serious one: it doesn’t get much more serious than world leaders intervening to get men with big guns to have big talks about their big disputes. On the other hand, the colonisation of space is important, but seems to have very little gravitas indeed; it’s something out of a science fiction novel. However, just as it’s a brute fact about the world that there are lots of violent people with big guns, it’s also a brute fact that space is big; both of these facts should be taken seriously when considering the long-run future. Reality is not split by genre.
More generally, it’s important to keep in mind that every moral advance started out as a weird idea (for example, it was once considered crazy to suggest that women should get to vote).
Parts of EA are very theoretical. This, too, is by design. Future posts will show many cases where which way we resolve a very abstract issue has a big impact on what the right practical action is – and in many of these cases it is unclear what the right resolution is. Finding out clearly matters.
If EA seems too theoretical or mathematical to you, consider two points. First, whatever the field, doing complex things in the real world tends to involve theoretical heavy lifting. Second, most charity efforts don’t pay much attention to theoretical issues; EA is at very least a helpful counterweight, and likely to uncover missed opportunities.
Whenever the goal is to do good, it is easy to be overwhelmed by feelings of righteousness and forget theoretical scruples. Unfortunately we don’t live in the simple world where what feels right is the same as what is right.
The core of effective altruism is not any particular moral theory or cause area, but a conviction that doing good is both important and difficult, and hence worthy of thought.
This post series:
- Rigour and opportunity in charity: this post.
- Expected value and risk neutrality: expected value is the right way to think about risk (though you have to be careful not to be too simplistic about it). Effective charity might often involve large upsides but a low probability of success.
- Moral and epistemic uncertainty: we are uncertain both about what is right and about what the world is like. The former means we have to consider how to act when we find many moral theories at least slightly plausible. The latter leads to questions of how to act given this uncertainty.
- Utilitarianism: while not a prerequisite for other EA ideas, I argue that utilitarianism is generally a sensible and humane moral system, with the particular advantage of extending easily to big and weird issues, and that most things we regard as moral advances have been shifts to more utilitarian morality.
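The expected-value point in the list above can be sketched numerically. The probabilities and payoffs here are invented for illustration, not drawn from any real intervention:

```python
# Toy expected-value comparison: a guaranteed modest benefit versus
# a long shot with a large upside. All numbers are hypothetical.
def expected_value(prob_success, value_if_success):
    """Expected units of good from an intervention with one outcome."""
    return prob_success * value_if_success

safe_bet  = expected_value(1.0, 100)      # certain: 100 units of good
long_shot = expected_value(0.01, 50_000)  # 1% chance of 50,000 units

# Despite a 99% chance of achieving nothing, the long shot has
# five times the expected value of the safe bet.
assert long_shot > safe_bet
assert long_shot / safe_bet == 5
```

This is the sense in which effective charity can favour high-upside, low-probability projects, and also why the later post needs to discuss the caveats of applying expected value naively.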