Making choices is the base state of being human. From choosing what to play with as kids to choosing what to study as young adults to choosing where to work to choosing whom to go out with, we’re mired in choices from the moment we’re born to the moment we die. In almost all of those choices, both nature and nurture limit the menu in front of us. Most of us don’t actively choose not to become an astronaut or an underwater oil rig diver; those choices just wither away in the branches of the tree of life as we move forward.
And yet the ambition remains of choosing the best option. Choosing the best ice cream, the best music to listen to, the best major in college, the best job. All of these rely on some version of ranking options against criteria and maximising the output. Because the truth is, we don’t live in small villages anymore. The infinitude of choices would overwhelm us if we could even glimpse its true breadth. Thus is born, and dies, the spirit of utilitarianism that is inherent in every child.
But once you’re a young adult you tend to learn more about the world and want to do what young adults do. Which, for the most ambitious folks, turns out to be an incredible urge to have an indelible impact on the world. This feeling of slight vacuousness, combined with incredible energy and a need for a community that understands them doing extraordinary work, has been fodder for all sorts of organisations looking to find talent. Have a look at this list from my friend Paul Millerd on the various promises companies make in their recruitment pitches.
But this is fine. It’s normal, and pretty much the same sales pitch made by most organisations that run on the ambition of the young. If it’s being explicitly pushed towards making the world a better place, that’s a good thing.
However, the choices you make are circumscribed by what you can see. Complexity, as always, is the undoing of the God’s-eye view of optimisation for us mere mortals. As the world gets larger, it gets harder to make the best choice.
Just like we didn’t know how best to organise our economy and distribute resources in order to get the best possible outcome, we also don’t know how best to employ our resources to do the most good in the world.
Since we don’t actually know, we have to find out, to estimate. Which is why today a large part of EA is essentially McKinsey for NGOs. NGOs and aid organisations are notoriously prone to bad governance, bad management and high degrees of inefficiency. Not because they’re run by bad actors, by the way, but because the lack of a sufficiently strong steering force often makes course correction well nigh impossible.
All this to say, EA has the same failure modes. As an institution, a movement, it has had incredible success and built a very strong community, with talented individuals joining. Its failure modes, therefore, are twofold:
It will not succeed in its stated aim of doing the most good in the world with its current approach, because most of the interesting things it can do are currently out of scope
It needs to radically increase its focus on doing rather than prioritising, thinking and researching if it’s to fulfil the potential of its members
EA plays the prioritisation game very well. Give it a set of options and it will analyse them to the nth degree, assign numbers where it can, assign probabilities of uncertainty where it can’t, create calculations of QALY benefits, and compare the list.
When I was at McKinsey, we would do the same, often comparing alternatives in an Impact vs Tractability matrix. Measuring impact and tractability naturally entails a large degree of guesswork, but the alternative is throwing darts at a board and going with one’s gut. EA’s prioritisation is much the same, as are the criticisms of the process.
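To make the flavour of that exercise concrete, here’s a minimal sketch of the kind of expected-value comparison it produces. The cause areas, figures and scoring rule below are all invented for illustration, not anyone’s actual estimates:

```python
# A toy impact-vs-tractability comparison, in the spirit of the
# prioritisation exercises described above. Every number is a guess.

causes = {
    # cause: (QALYs gained per $1M if the intervention works,
    #         probability that it works)
    "malaria nets":          (5_000, 0.90),
    "policy advocacy":       (50_000, 0.05),
    "pandemic preparedness": (500_000, 0.002),
}

# Rank causes by expected QALYs per $1M, highest first.
for name, (impact, p_success) in sorted(
    causes.items(), key=lambda kv: -kv[1][0] * kv[1][1]
):
    expected_qalys = impact * p_success
    print(f"{name:>22}: {expected_qalys:>8,.0f} expected QALYs per $1M")
```

The sorting and printing are trivial; all the contention lives in the input dictionary.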
But the Knightian uncertainty is high. And the output is as often as not a document meant to explain what you found and convince others. Which means everyone is stridently trying to create convincing pitches to each other about the cause areas they’re most passionate about. But there’s not enough actual action. It reminds me of nothing so much as the pitch books made inside Goldman Sachs: an attempt to articulate a vision of the future, in as much detail as you can, with the primary purpose of pushing forward an agenda and convincing others.
And yet, and yet, no matter how you slice it, there’s a reason McKinsey isn’t known for being extraordinarily innovative in its prescriptions. It’s not for lack of intellectual rigour or the will to do it; it’s that the organisational ethos lends itself to optimisation, to doing better the things it can measure. Just as its job is to shore up the uncertainties involved in making choices between options that are semi-defined, it is often blind to the potential for de novo creation.
To me this is at the heart of my disagreement with EA. I’m not an EA not because of a fundamental issue with altruism, or indeed an effective version of it, or even the cold utilitarian calculus leading me astray, but because I feel the mechanism by which EA tries to impact the world is, in some core ways, insufficient. Its core methodology, of figuring out and then doing the thing that does the “most good”, isn’t all that effective, considering that the greatest poverty alleviation and life improvement we’ve seen in the past fifty years have come from economic growth and development. The choices EA sees in front of it are only a fraction of the total picture.
This is partly why a large part of EA aims to solve negative externalities—remove x-risk, reduce suffering, reduce short-term thinking—rather than actively focusing on creating positive impacts. It focuses on areas where it can either create defined measurements or create highly unspecific (not even false) Drake Equations. Meanwhile, our biggest strides towards both social and material progress have come from encouraging technological and economic progress, an area that seems to get removed from the portfolio of things EA should consider.
It’s the Scylla and Charybdis of choices. Either EA completely opens up its decision-making process, saying do good however you can, including through startups and lobbying and research and exploration and fiction and poetry and community organising, or it restricts itself to the areas it calculates as most important while ensuring the community it created remains alive and engaged!
I wrote tongue in cheek that EA should sponsor science fiction, because that is upstream of the motivation for so many of the people who have fundamentally reshaped the world. But that’s not the sort of thing that gets funded, because it’s highly illegible.
EA therefore plays an incredibly important role mainly focused on raising our baseline: it treats minimising suffering as the primary good. Its counterpart might be the newly forming field of Progress Studies, which is much more focused on doing the experiments necessary to increase the pie, starting from the argument that economic growth has historically been the only silver bullet in our arsenal, and that’s where we ought to focus.
The difference is that the shore-up-the-bottom strategy tries to push the entire curve up but mainly works at the long tail. The other strategy is what’s used amongst the top quintiles and deciles, and helps push up the top end of the curve. The strategies the top end develops eventually trickle down and become common knowledge, and once diffused they stop being sources of alpha.
On the first point, of what has actually helped do the most good, we do have some benefit of hindsight. The past century is known as part of the Information Age, or the Age of Globalisation. But when we look at it, one thing that jumps out is the extraordinary reduction in human suffering.
Poverty isn’t the same as suffering of course, but they’re awfully well correlated.
When we think about the stated ambition of reducing human suffering or increasing human happiness, if the options are to identify the sources of suffering and to eliminate them, whether that’s disease or poverty, we see direct courses of action. You’re one step removed from making your dollar truly count.
But the chart tells a different story. The biggest suffering-elimination effort in the history of the world happened in the latter half of the 20th century, as China got wealthier. It moved 600 million people, roughly the population of the entire planet three centuries ago, out of abject poverty. This is remarkable!
This is also something that wouldn’t have happened nearly as fast, or at nearly the same scale, had we looked at those individual citizens and tried various versions of redistribution. We might have been able to kickstart some bottom-up ability for some of them to build the things they dreamt of, but it wouldn’t have been easy. It wouldn’t have had the enormous power of an entire state upping its capabilities and providing the strong base that drives growth.
That is also why the giving had to be focused on areas which are marginal (because there’s only so much), neglected (otherwise you’re not having counterfactual impact) and tractable (obviously). Which moves the goalposts enough that you both identify the major areas which need to be worked on, and also eliminate the ones where you think enough work is being done.
The only way we have figured out how to solve this kind of cognitive overload is by not deciding at the top. Focusing on neglectedness is like trying to find an untouched niche to start a company in. That’s traditionally not how the best ideas get identified.
And that’s the second point. Action and iteration are the best ways we’ve found to solve big problems, starting from tiny seeds. Just as startups provide this impetus in the commercial world, EA should help provide this impetus in the aid world.
If we look at its greatest successes, such as GiveDirectly, those have been successes born of simplicity, legibility and exceptional value created. Those achievements stand true regardless of whether those are indeed the “best” ways to spend that money.
We can’t know if saving a life today was the best way to spend that money, versus investing it and saving three lives a decade from now, regardless of the QALY assumptions or utilitarian calculus of how many more people we could help. If Bill Gates hadn’t started his Foundation until now, he would have had $500 billion to give away instead of a fifth of that. Was that the right decision?
Imagine you have a choice. You could give $10k to EA and save 2-3 lives today. Or you could invest that money, and with the proceeds save 10 lives in 30 years, which at a 3% discount rate still comes out to more, at around 4 lives. Which do you choose? If you had a discount rate on future lives you could solve for the equilibrium.
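The arithmetic behind that comparison is simple enough to sketch; the lives-saved figures and the 3% rate are the hypotheticals from above, not real cost-effectiveness estimates:

```python
# Present value of lives saved, discounting future lives at a fixed
# annual rate. All figures are the hypotheticals from the text.

def present_value(lives: float, years: int, rate: float) -> float:
    """Discount lives saved `years` from now back to today's terms."""
    return lives / (1 + rate) ** years

give_now = 2.5                                     # ~2-3 lives, today
invest = present_value(10, years=30, rate=0.03)    # 10 lives, 30 years out

print(f"give now: {give_now:.2f} lives")
print(f"invest:   {invest:.2f} discounted lives")  # ~4.12, so investing "wins"
```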
My gut reaction is to choose the first. It’s immediate, there’s an actual suffering person whom you are helping, and it’s certain. But the uncertainty is also why there is a range of outcomes for the future and a discount rate!
What if we all collectively invest the capital, which increases the growth rate of the economy, which makes everyone wealthier? We know that if you’re above median income your life expectancy is higher. Move from the poorest 1% to the richest 1% and it’s 14.6 years extra, roughly 20% higher! Which means that if you manage to move a chunk of people up the income ranks as the country gets wealthier, there is a substantial benefit to health and life expectancy!
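For the arithmetic: the 14.6-year gap is from the Chetty et al. income-and-life-expectancy work; the baseline below is my approximate recollection of their bottom-percentile figure for US men, so treat it as an assumption:

```python
# Rough check of the life-expectancy claim. The baseline is an
# approximate figure (Chetty et al., US men), used here as an assumption.

bottom_1pct_expectancy = 72.7   # assumed life expectancy, poorest 1% (years)
gap_years = 14.6                # extra years for the richest 1%

print(f"{gap_years / bottom_1pct_expectancy:.0%} higher")  # ~20%
```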
The point is not that you can’t reason your way through the dilemma. The point is that we have a preference for helping people today and a preference for helping people in the immediate (and long) future. These are not necessarily compatible or comparable, but we choose our own ethical systems to allocate our resources. From a revealed preference point of view, it’s some combination of utilitarianism (if the IRR is greater than 17%, invest), deontology (we should help the drowning child if we see her) and virtue ethics (I should give 10% of my earnings because that’s the right thing to do).
Any combination of these three, subject to the constraints of our evolved inner self, will result in something resembling the modern system—where you can run a giant crypto hedge fund and do good with that money, people donate their money, time and energy to help people they have never seen or met, and where people donate their kidneys to strangers! It bears repeating—people literally donate their kidneys to strangers! The individual actions of EA folks are nothing short of exemplary (except for small nooks where I disagree), and this shouldn’t be discounted!
Imagine if a single venture capital firm were charged with identifying and investing in the best companies of the future. It would be an abject failure. It is only through the cultivation of thousands of VC firms, with widely varying lenses and investment styles and sourcing networks and diligence methods, that we get anything resembling a functioning market for identifying future performance.
The credential- and logic-constrained conversations around identifying the best possible marginal opportunities put way too much emphasis on the ability of the leadership, team, or individuals within the team to identify those opportunities. This is too high a bar!
The answer to my contentions is that EA needs to create a city. There are essays about how EA isn’t all command-and-control and how there’s heterogeneity of opinions, and they’re right insofar as it’s not like the Bill & Melinda Gates Foundation, with a clear leader at the top. But it’s also not as if the different organisations under the EA umbrella are pursuing drastically different ideas of what “do the most good you can” means—they’re singing from the same hymn book. It has many heads, but the same body.
Money being fungible, there are a lot of ways to accomplish the goal of doing the most good you can. Funding to eradicate malaria sounds awfully close to funding to create mRNA vaccines, as the Gates Foundation does. Fighting global catastrophic risks by advocating for asteroid defence is awfully close to building space colonies or settling Mars. Except the first happens to fall into an “EA” bucket and the latter into a “billionaires with dreams” bucket or a “venture capital” bucket.
The point is not just that the categories were made for man, but that fulfilling an overarching purpose requires a plurality of attempts. The current attempts mostly seem like overly analytical, dare I say academic, affectations, where the underlying thesis is close to “if we get the right idea in the right shape we could change everything”. The actually better solution probably ought to be far more action-focused.
Is the difference purely that one is defensive and the other offensive? If so, those differences are wildly exaggerated in the present and will look ephemeral in the future! EA therefore should be wildly more open to ideas that have the prospect of making humanity better, even if they don’t pass the immediate QALY inspection.
It’s not just this. For a lot of wicked problems, the only way to find good solutions is to actually attempt them. As with every startup, the only way to achieve success is to do the startup, since the pivots and changes and edits, the general path a company takes from an idea to success, are both unknowable from the beginning and unchartable.
What would heterogeneity look like here? Is it much more diversity of grant giving organisations? Higher level of experimentation, away from the tyranny of QALY measurements?
The way to do it would be to create a city, one that ideally brings together people who want to do the most good, but then gives them the freedom to go build what they need to build. It might be best seen as a proto-network-state, as Balaji might say. Or, indeed, as an answer to the libertarian argument that EA initiatives are still very command-and-control and not market-based.
Maybe run prediction markets on what the highest-impact investments should be and allocate funding on the basis of people putting money (votes) where their mouth (opinion) is. This would mean removing the centralised element of trying to figure out which things to fund, and instead seeing how the city itself flourishes. It’s the only way to have both “individuals work on their best guesses on how to do the most good” and “we can’t solve every problem the world faces top-down”.
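As a minimal sketch of what that mechanism could look like, here is a toy allocator. The cause names, stake sizes and proportional-split rule are all invented for illustration, and a real market would also need outcome resolution and payouts:

```python
# Toy funding allocator: stakes act as votes, and a central pot is
# split in proportion to the money staked on each cause. All names
# and numbers are invented; outcome resolution and payouts are omitted.

from collections import defaultdict

stakes = defaultdict(float)

def bet(cause: str, amount: float) -> None:
    """Record a participant putting money where their mouth is."""
    stakes[cause] += amount

def allocate(pot: float) -> dict[str, float]:
    """Split the pot proportionally to the total stake per cause."""
    total = sum(stakes.values())
    return {cause: pot * s / total for cause, s in stakes.items()}

bet("malaria nets", 40_000)
bet("biorisk advocacy", 25_000)
bet("science fiction grants", 5_000)

for cause, grant in allocate(pot=1_000_000).items():
    print(f"{cause:>22}: ${grant:,.0f}")
```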
Which means a modified form of the current EA program comes down to—“do the most good you possibly can, once you figure out what to do”—removing the issue of centralised authorities with limited precognitive abilities and the difficulty of dealing with illegible goods, and leaving you your own barometer of how important “doing good” is compared to your other motivators like “travel to see the Parthenon” or “buy a cool car”.
This is of course fine. No movement needs to answer all of our altruistic efforts. Its addressable market doesn’t need to be “all the good done in the world”. It might be the marginal path that tries for moonshots, leaving the middle to the NGOs and governments, or the mainstream path that solves the middle of the distribution while the ends remain free and require much more idiosyncratic intervention (with all due apologies for the two-dimensional characterisation). It can solve the areas that are better defined: the more utilitarian and prosaic analyses leading to malaria nets, and the slightly more complex ones of existential-risk reduction through nuclear-safety advocacy or bioweapons research bans.
The belief in probabilistic legibility and the increased belief in doing research combine in some of the longtermist work that EA espouses. My issue is not an inherent flaw in a mathematical sense, though I don’t buy the zero-discount-rate argument for either geography or time, but an actual implementation one.
A key problem with following the trajectory of x-risk and AI-safety-type work is that a large part of EA work becomes, in effect, maintenance work. If the prize for guessing correctly is 10^6x, then you might end up dedicating large swathes of your efforts to the far wilds of the probability fields. And I fundamentally don’t think our methods of identifying the best low-probability-high-reward areas of exploration are anywhere near good enough.
We are bad at this in government, we’re bad at it in social sciences, we’re bad at it in the real sciences as often as not, we’re bad at it in grant giving and we’re bad at it in venture capital investments. It strikes me that beyond a barrier of plausibility, we’re just bad at recognising a 0.1% chance event from a 0.00001% chance event, or even from a 1% chance event.
What this means is that the focus on solving “big important problems” becomes a problem of “finding the big important problem”, and we’re back to dedicating large amounts of administrative and bureaucratic effort in figuring out what work to do. It’s worth noting that the biggest impacts of EA have really been in areas like GiveDirectly, which tries to reduce or eliminate that very burden.
I also worry, because I like EA, that this approach restricts its longevity.
If people spend a decade or more in areas with no payoffs or benefits leaking into the real world, then you’re burning out the smart people of today and creating a much bigger barrier to attracting the next generation. Burnout by itself doesn’t sound like a big problem to me; many large, successful companies very happily use burnout as fuel for their growth. But burnout without a clear payoff elsewhere is a no-win situation.
If an ever larger chunk of the conversation around EA revolves around its focus on low-probability-high-impact areas, that will fundamentally distort the experience of both existing EAs and the future EAs who want to join. Yes, preventing a biorisk with a 0.01% chance is important, but you had better also have enough clout that when the bet doesn’t pay off (as it won’t 9,999 times out of 10,000), you’re not just a demoralised husk left behind.
This is why my concern is that EA is focused on a weird barbell strategy—items it can legibly analyse and prioritise in a matrix on one end, and high-risk longtermist bets that might never pay off on the other! And it’s doing this at the cost of not engaging in the efforts that could actually make life better for a lot of people.
EAs have created a strong social network and community. They sacrifice part of their salaries, and even choose careers, to maximise their ability to give. It is rare to see a relatively bottom-up movement successfully capture the hearts and minds of a meaningful chunk of smart people, and have them work directly towards the betterment of their fellow man.
This is fantastic!
A lot of the criticisms made of it are also perfectionist fallacies. They compare EA to a Platonic ideal of a giving-focused organisation and find the ways in which it falls short. I take it as a given that an organisation dedicated to doing X will oftentimes have difficulties with the fuzzy edges of X. I also take it as a given that some percentage of people, if they put their heart and soul into doing one thing and being part of one organisation, might end up hitting a burnout wall.
But those aren’t fatal to EA the organisation. Most real-world implementations fall short of perfection, often far short, but they still work. I wanted to look at ways in which EA the organisation might hit a wall if it keeps going the way it’s going, which is not wrong™ but rather counterproductive.
And so my overarching feeling is that, considering EAs want to do the most good they can, the way they’re looking at the world feels prone to self-defeat. It’s more a commentary on what I find implausible in its attempt to fulfil its own goals; falling short would be a damn shame, because its goals are good!
EA makes me believe in a world where having high degrees of impact is easy, where you get to work alongside really smart, driven and kind people, where you believe in the possibility of a million year long sojourn for humanity, where people donate kidneys to strangers, where everyone is expected to be of roughly equal worth regardless of birthplace or birth year. This feels very close to being a brilliant society to be a part of.
EA also provides a moral framework, a social community of like-minded people, and an institutional apparatus to create change in the world. To me this is EA’s absolute strength. It is a positive ideology. It gives smart, young, ambitious people a way to clearly have an impact in the world and do good. And there are far fewer of these than there should be. These are the exact feelings that large, prestigious organisations like Google, Goldman Sachs or McKinsey tap into when they ask the very same people to come work for them. I’d rather EA got its due too.
While it doesn’t solve all the problems it could, or work optimally in all regards, that’s a rather silly goal to hold it to anyway. And no number of silly blogposts decrying the exact method of grant-making changes this, since coarse-grained, large actions matter much, much more than fine-grained fine-tuning. Mostly, though, I wish EA didn’t stake out strong moral baileys, in the form of dogmatic beliefs, philosophical underpinnings and ultimate aims, which turn out to be common-sense mottes whenever they are poked or prodded.
Does it do good? Yes. Does it do the most good it can? No. Do its adherents argue incessantly about whether the way it does good is optimal? Oh god, yes. Does everyone have multiple philosophical disagreements that ultimately don’t matter all that much but make everyone feel like they have a voice? Seems that way.
And so, I hope it continues to give meaning to many more lives, and save many many more as a result.
My sense is you think EA should be more ambitious, and I agree. But I also don’t think EA should always have been more ambitious, ex ante. When EA was growing, the careful work they did seemed reasonable, and I am not sure they should have predicted the wild success that followed.
(please someone make the opposite case here)
In that sense, EA did a lot [citation needed]: hundreds of thousands of people are alive because of EA who otherwise wouldn’t be. And yeah, I want to fundamentally improve governance, but:
I would like not to die soon. While I agree that we are uncertain about unlikely outcomes, we aren’t always talking about unlikely outcomes. A survey of normal (not just EA) AI researchers found that nearly half put the probability of awful outcomes at 5% or higher. This isn’t a 0.1% or a 0.01%, and it may not be 1%; it may be 10% or higher.
I think I find cities cool, and it would seem suspiciously convenient if building one really were the best way to help the world.
(I know Rohit personally)
Someone else can respond better than me, but EAs work on a lot of economic growth stuff, I think. Though our failure to make this comprehensible to outsiders (say, with a forum page called “EA economic growth projects”) is a genuine failure. I wish we had a collaborative summary.
Some suggested work:
Open Philanthropy funded a load of work on countries loosening fiscal policy, which has, I think, had tens to hundreds of billions of dollars of impact on national budgets
I’m sure someone is looking at economic growth
More broadly, economic growth is a hard problem. Many people are working on it. I think it was good that EAs initially assumed it wasn’t where they could add a lot more than was already present. As EA has grown, my sense is that’s no longer the case.
I really wish my comments could float beside the text.