Effectiveness is a Conjunction of Multipliers
Epistemic status: Not new material, but hopefully points more directly at key EA intuitions.
Ana is a hypothetical junior software engineer in Silicon Valley making $150k/year. Every year, she spends 10% of her income to anonymously buy socks for her colleagues. Most people would agree that Ana is being altruistic, but not particularly efficient about it. If utility is logarithmic in income, Ana can 40x her impact by instead giving the socks to a local homeless person with an income of $5000. But in the EA community, we’ve noticed further multipliers:
(1) 40x: giving socks to local homeless people instead of her colleagues
(2) 10x more: giving socks to the poorest people in the world (income $500) instead of homeless people
(3) 2x more: giving cash (GiveDirectly) instead of socks
(4) 8x more: giving malaria nets rather than cash
(5) 10x more: farmed animal welfare rather than human welfare. [1]
(6) 4x more: working in a more lucrative industry like quant research, working longer hours, and doing salary negotiation to raise her salary to $600k[2]
(7) 8x more: donating 80% instead of 10%
(8) 10x more: taking on risk to shoot for charity entrepreneurship or billionairedom, producing $6M of expected value yearly[3]
Total multiplier: about 20,480,000x[4]
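The stacking arithmetic can be checked directly. Here is a minimal sketch in Python; the individual factors are the post's own rough estimates, and per footnote [4] the total could be off by 10x in either direction:

```python
from math import prod

# Each factor below is the post's own rough estimate.
multipliers = {
    "local homeless instead of colleagues": 40,
    "world's poorest instead of local homeless": 10,
    "cash (GiveDirectly) instead of socks": 2,
    "malaria nets instead of cash": 8,
    "farmed animal welfare instead of human welfare": 10,
    "higher salary ($600k via quant research etc.)": 4,
    "donating 80% instead of 10%": 8,
    "risky entrepreneurship ($6M/yr expected value)": 10,
}

total = prod(multipliers.values())
print(f"{total:,}x")  # prints 20,480,000x

# Missing just one 10x multiplier still leaves a large number...
without_one = total // 10
# ...but it keeps only 10% of the maximum possible impact.
print(f"{without_one / total:.0%}")  # prints 10%
```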
I think that many people new to EA have heard that multipliers like these exist, but don’t really internalize that all of these multipliers stack multiplicatively. If Ana hits all of these bonuses, she will have a direct impact 20,480,000 times larger than giving socks to random colleagues. If she misses one of these multipliers, say the last one, Ana will still have a direct impact 2,048,000 times larger than with the initial socks plan. This sounds good until you realize that Ana is losing out on 90% of her potential impact, consigning literally millions of chickens to an existence worse than death. To get more than 50% of her maximum possible impact, Ana must hit every single multiplier. This is one way that reality is unforgiving.
Multipliers result from judgment, ambition, and risk
Good judgment: responsible for multipliers (1) through (5), making the impact 64,000 times larger, and is implicit in (8) too, because going through with a bad for-profit or charity startup idea could be of zero or even negative value.
Ambition: responsible for multipliers (6) through (8), making her expected impact 320x larger.
Willingness to take on risk is mostly relevant in (8), though you could think of (5) as having risk from moral uncertainty.
This example is neartermist to make the numbers more concrete, but the same principles apply within longtermism. For a longtermist, good judgment and ambition are even more critical. It’s difficult to tell the difference between a project that reduces existential risk by 0.02%, a project that reduces x-risk 0.002%, and a worthless project, so you need excellent judgment to get within 50% of your maximum impact. Ambition is in some sense what longtermism is all about—longtermist causes have a huge multiplier resulting from astronomically larger scale and (longtermists argue) only somewhat worse tractability. And taking on risk allows hits-based giving whether in neartermism or longtermism.
More generally, actions, especially complicated actions and research directions, live in an extremely high-dimensional space. If actions are vectors and the goodness of an action is its cosine similarity to the best action, and your action is 90% as good as the optimum (about 26° off the best path) in each of 50 orthogonal directions, the amount of good you do is capped at 0.9^50 ≈ 0.005x the maximum.
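A quick sketch of this calculation; the 50-dimension setup and the 90% figure are the paragraph's illustrative numbers, not empirical estimates:

```python
import math

# Per-dimension quality and dimension count are illustrative.
per_dimension = 0.9
dimensions = 50

# Overall goodness as the product of per-dimension factors.
overall = per_dimension ** dimensions
print(f"{overall:.4f}")  # prints 0.0052

# A cosine similarity of 0.9 corresponds to being ~25.8 degrees
# off the best direction.
angle = math.degrees(math.acos(per_dimension))
print(f"{angle:.1f}")  # prints 25.8
```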
Implications
It’s very difficult to take an arbitrary project that you’re excited about for other reasons, and tweak it to “make it EA”[5]. An arbitrary project will have zero or one of these multipliers, and making it hit seven or eight more multipliers will often make it unrecognizable.
People who are not totally dedicated to maximizing impact will make some concession to other selfish or altruistic goals, like having a child, working in whichever of (academia, industry, other) is most comfortable, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their “EA part” should try much harder to make a less costly concession instead, or find a way to still hit the multiplier.
It’s more important to have good judgment than to dedicate 100% of your life to an EA project. If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours. But if bad judgment causes you to miss one or two multipliers, you could make less than 10% of your maximum impact. (But note that working really hard can sometimes enable multipliers—see this comment by Mathieu Putz.)
Information is extremely valuable when it determines if you can apply a multiplier. For example, Ana should probably spend a year deciding whether she’s a good fit for charity entrepreneurship, or thinking about whether her moral circle includes chickens, but not spend a year choosing between two careers that have similar impact. Networking is a special case of information.
Finding multipliers is hard, so most people in the EA community (likely including me) are missing at least one multiplier, and consequently in some sense doing less than 50% the good they could be.
- ^
Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana’s moral circle includes all beings weighted by neuron count, but she hadn’t thought about this enough.
- ^
As of 2022, typical pay for great quant researchers with a couple of years of experience, or great developers with a few years of experience.
- ^
Ana is in theory ambitious and skilled enough to start a charity or tech startup, but she hasn’t heard of Charity Entrepreneurship yet.
- ^
Could be off by 10x in either direction, but doesn’t affect my core point.
- ^
“make it EA” = “make it one of the highest-impact things you could be doing”, not “make the EA community approve of it”
Great post! While I agree with your main claims, I believe the numbers for the multipliers (especially in aggregate and for ex ante impact evaluations) are nowhere near as extreme in reality as your article suggests for the reasons that Brian Tomasik elaborates on in these two articles:
(i) Charity Cost-Effectiveness in an Uncertain World
(ii) Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
I mostly agree; the uncertain flow-through effects of giving socks to one’s colleagues totally overwhelm the direct impact and are probably at least 1/1000 as big as the effects of being a charity entrepreneur (when you take the expected value according to our best knowledge right now). If Ana is trying to do good by donating socks, instead of saying she’s doing 1/20,000,000th the good she could be, perhaps it’s more accurate to say that she has an incorrect theory of change and is doing good (or harm) by accident.
I think the direct impacts of the best interventions are larger than their expected (according to our current knowledge) net flow-through effects in a trivial sense, since if nothing else we can analyze flow-through effects of arbitrary interventions and come up with better interventions that optimize for this until we find the best ones.
I agree—and if the multiplier numbers are lower, then some claims don’t hold, e.g.:
This doesn’t hold if the set of multipliers includes 1.5x, for example.
Instead we might want to talk about the importance of hitting as many big multipliers as possible. And being willing to spend more effort on these over the smaller (e.g. 1.1x) ones.
(But want to add that I think the post in general is great! Thanks for writing this up!)
Well, you know what the stereotype is about women in Silicon Valley high tech companies & their sock needs… (Incidentally, when I wrote a sock-themed essay, which was really not about socks, I was surprised how many strong opinions on sock brands people had, and how expensive socks could be.)
If you don’t like the example ‘buy socks’, perhaps one can replace it with real-world examples like spending all one’s free time knitting sweaters for penguins. (With the rise of Ravelry and other things, knitting is more popular than it has been in a long time.)
Great post, thanks for writing it! This framing appears a lot in my thinking and it’s great to see it written up! I think it’s probably healthy to be afraid of missing a big multiplier.
I’d like to slightly push back on this assumption:
First, I agree with other commenters and yourself that it’s important not to overwork / look after your own happiness and wellbeing etc.
Having said that, I do think working harder can often have superlinear returns, especially if done right (otherwise it can have sublinear or negative returns). One way to think about this is that the last year of one’s career is often the most impactful in expectation, since one will have built up seniority and experience. Working harder is effectively a way of “pulling that last year forward a bit” and adding another even higher impact year after it. I.e. a year that is much higher-impact than your average year, hence the superlinearity.
Another way to think about this is intuitively. If Sam Bankman-Fried had only worked 20% as hard, would he have made $4 billion instead of $20 billion? No. He would probably have made much much less. Speed is rewarded in the economy and working hard is one way to be fast.
This makes the multiplier from working harder bigger than you would intuitively expect and possibly more important relative to judgment than you suggest.
(I’m not saying everyone reading this should work harder. Some should, some shouldn’t.)
Edited shortly after posting to add: There’s also a more straightforward reason that the claim “judgment is more important than dedication” is technically true but potentially misleading: one way to get better judgment is investing time into researching thorny issues. That seems to be what Holden Karnofsky has been doing for a decent fraction of his career.
A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX’s speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that’s mainly useful for long AI timelines), it may apply less strongly.
I agree that superlinearity is way more pronounced in some cases than in others.
However, I still think there can be some superlinear terms for things that aren’t inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.
The examples you give fit my notion of speed—you’re trying to make things happen faster than the people with whom you’re competing for seniority/reputation.
Similarly, speed matters in quant trading not primarily because of real-world influence on the markets, but because you’re competing for speed with other traders.
Fair, that makes sense! I agree that if it’s purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.
I would just note that speed-sensitive considerations, in the broad sense you use it, will be relevant to many (most?) people’s careers, including researchers to some extent (reputation helps doing research: more funding, better opportunities for collaboration etc). But I definitely agree there are exceptions and well-established AI safety researchers with long timelines may be in that class.
FWIW I think superlinear returns are plausible even for research problems with long timelines, I’d just guess that the returns are less superlinear, and that it’s harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.
I like this post, thanks Thomas!
I want to make a comment for maybe newer people especially with some of the uses of the word “EA” here. I’ll take an example to illustrate: “People who are not totally dedicated to EA will...”
I actually think this means (or if it doesn’t, it should mean), “people who are not totally dedicated to impartially maximizing impact as defined under a plausible moral theory [not the point of this to debate which are plausible] will...” or something like that. In other words, “people who are not totally dedicated to the basic principles of EA”.
It doesn’t (or shouldn’t) mean “people who are not totally dedicated to the EA community” or something else that might imply only working at an EA-branded org, only having EA friends, or only working on a cause area that some proportion of EAs think is worthwhile. The EA community is probably a good way to find multipliers and a useful signal for what is valuable, but it is not the final goal at all and doesn’t have all the answers.
I could imagine some case in which it makes sense to do something “less EA” (in the sense that fewer people in EA think it’s valuable) because it’s actually “more EA” (in the sense that it’s actually more valuable for maximizing impact). The point of this example isn’t to establish how likely this is, just to point out that the final goal is maximizing impact, not EA the community, and that “more EA/less EA” is a bit ambiguous.
This might be totally obvious to most readers of this comment, but I wanted to write it anyway just in case there are people who don’t find it obvious (or it isn’t at all obvious, or not what Thomas meant).
Thanks, I made a minor wording change to clarify.
I think it also applies here (which, by the way, is one of the most thought-provoking and useful parts of this post). I think some alternative phrasing like the below actually might make the point even more self-evident:
“It’s very difficult to take an arbitrary project that you’re excited about for other reasons, and tweak it to make it the most maximally impactful project you could be working on.”
Very nice post. It does seem like two of your points are potentially at odds:
>People who are not totally dedicated to EA will make some concession to other selfish or altruistic goals, like having a child, working in academia, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their “EA part” should try much harder to avoid this concession, or find a way to still hit the multiplier.
vs.
>Aiming for the minimum of self-care is dangerous.
It seems the “concessions” could fall under the category of self-care.
Agree—and I would consider adjusting the first of those passages (the one starting with “people who are not totally dedicated to EA”) for such reasons.
all of these concessions except working in academia seem pretty unlikely to result in missing a multiplier, unless they result in working on the wrong project somehow. otherwise they look like efficiency losses, not multiplier losses. in particular, having a child and being tied to a particular location seem especially unlikely to result in loss of a multiplier, at least if you maintain enough savings to still be able to take risks. pursuing fuzzies is more complicated bc it depends how much of your time/money you spend on it, but you could e.g. allocate 10% of your altruism budget to fuzzies and it would only be a 10% loss.
Some ways that these concessions can lose you >50% of your impact:
Having a child makes simultaneously founding a startup really hard (edit: and can anchor your family to a specific location)
Working in academia can force you to spend >50% of your effort as a grad student researching unimportant problems, playing politics, writing grants, and such; academia also has benefits, but your research won’t always capture them, so in the worst case this eats >50% of your impact
If you prioritize AI safety, and think most good AI safety research happens at places like Redwood, MIRI, Anthropic, CHAI, etc., living in the CA Bay Area can be 2x better than living anywhere else
If you prioritize US policy, living in DC can be >2x better than living anywhere else
Allocating 10% of your altruism budget to fuzzies is a good plan, and I’m mostly worried about people trying to get fuzzies in ways that are much more costly for impact. For instance, EA student groups being optimized for being a “thriving community” rather than having a good theory of change, or someone earning-to-give so that they can donate for fuzzies rather than doing direct work that’s much more impactful.
I know lots of people who are incredibly impactful and are parents and/or work in academia. For many, career choices such as academia are a good route to impact. For many, having children is a core part of leading a good life for them and (to take a very narrow lens) is instrumentally important to their productivity.
So I find those claims false, and find it very odd to describe those choices as “concession[s] to other selfish or altruistic goals”. We shouldn’t be implying “maximising your impact (and by implication being a good EA) is hard to make compatible with having a kid”—that’s a good way to be a tiny, weird and shrinking niche group. I found that bullet point particularly jarring and off-putting (and imagine many others would also) - especially as I work in academia and am considering having a child. This was a shame as much of the rest of the post was very useful and interesting.
Thanks for this comment, I made minor edits to that point clarifying that academia can be good or bad.
First off, I think we should separate concerns of truth from those of offputtingness, and be clear about which is which. With that said, I think “concession to other selfish or altruistic goals” is true to the best of my knowledge. Here’s a version of it that I think about, which is still true but probably less offputting, and could have been substituted for that bullet point if I were more careful and less concise:
When your goal is to maximize impact, but parts of you want things other than maximizing impact, you must either remove these parts or make some concession to satisfy them. Usually stamping out a part of yourself is impossible or dangerous, so making some concession is better. Some of these concessions are cheap (from an impact perspective), like donating 2% of your time to a cause you have personal connection to rather than the most impactful one. Some are expensive in that they remove multipliers and lose >50% of your impact, like changing your career from researching AI safety to working at Netflix because your software engineer friends think AI safety is weird. Which multipliers are cheap vs expensive depends on your situation; living in a particular location can be free if you’re a remote researcher for Rethink Priorities but expensive if by far the best career opportunity for you is to work in a particular biosecurity lab. I want to caution people against making an unnecessarily expensive concession, or making a cheap concession much more expensive than necessary. Sometimes this means taking resources away from your non-EA goals, but it does not mean totally ignoring them.
Regarding having a child, I’m not an expert or a parent, but my impression is it’s rare for having kids to actually create more impact than not having the desire in the first place. I vaguely remember Julia Wise having children due to some combination of (a) non-EA goals, and (b) not having kids would make her sad, potentially reducing productivity. In this case, the impact-maximizer would say that (a) is fine/unavoidable—not everyone is totally dedicated to impact—and (b) means that being sad is a more costly concession than not having kids, so having kids is the least costly concession available. Maybe for some, having kids makes life meaningful and gives them something to fight for in the world, which would increase their impact. But I haven’t met any such people.
It’s possible to have non-impact goals that actually increase your impact. Some examples are being truth-seeking, increasing your status in the EA community, or not wanting to let down your EA friends/colleagues. But I have two concerns with putting too much emphasis on this. First, optimizing too hard for this other goal has Goodheart concerns: there are selfish rationalists, EAs who add to an echo chamber, and people who stay on projects that aren’t maximally impactful. Second, the idea that we can directly optimize for impact is a core EA intuition, and focusing on noncentral cases of other goals increasing impact might distract from this. I think it’s better to realize that most of us are not pure impact-maximizers, we must make concessions to other goals, and that which concessions we make is extremely important to our impact.
This doesn’t seem like much evidence one way or the other unless you can directly observe or infer the counterfactual.
If you take OP at face value, you’re traversing at least 6-7 OOMs within choices that can be made by the same individual, so it seems very plausible that someone can be observed to be extremely impactful on an absolute scale while still operating at only 10% of their personal best, or less. (also there is variance in impact across people for hard-to-control reasons, for example intelligence or nationality).
If you prioritize US policy, being a permanent resident of a state and living in DC temporarily makes sense. But living permanently in DC forecloses an entire path through which you could have impact, i.e. getting elected to federal office. Maybe that’s the right choice if you are a much much better fit for appointed jobs than elected ones, or if you have a particularly high-impact appointed job where you know you can accomplish more than you could in Congress. But on net I would expect being a permanent resident of DC to reduce most people’s policy impact (as does being unwilling to move to DC when called upon to do so).
This is great—thanks for writing it.
Sam Bankman-Fried and Rob Wiblin discuss this general idea on the 80,000 Hours podcast.
Great post! My impression is that this is broadly right, and sometimes underappreciated. (Though I’m not sure about your quantitative bottom line for the reasons Darius mentions.)
I think this also has implications for the allocation of resources at a community level, because impact is often the product not only of decisions under a single person’s control but also of environmental factors – e.g., the number of potential supporters (employees, funders, …), the risk of a mental health crisis, and the number of valuable ideas one encounters in conversation all range over several orders of magnitude depending on one’s circumstances, and their value interacts with the other factors (if your charity implements an ineffective intervention, it doesn’t matter whether you meet lots of people who give you productivity advice or who are willing to work for you, etc.).
So the upshot is not just that as individuals we need to make the right call on lots of decisions if we want to maximize impact, it’s also that we need to structure the community in such a way that we ‘match’ different ‘factors of production’ in an optimal way with each other – the right people need to find each other, the right ideas, funding, advice, an environment allowing for peak and sustainable motivation, etc. – because we’ll only get the impact ‘super hits’ in cases where all input factors are set to near-maximal levels.
(I made similar points here.)
One can’t stack the farmed animal welfare multiplier on top of the ones about giving malaria nets or the one about focusing on developing countries, right? E.g. can’t give chickens malaria nets.
It seems like that one requires ‘starting from scratch’ in some sense. There might be analogies to the human case (e.g. don’t focus on your pampered pets), but they still need to be argued.
So I think the final number should be lower. (It’s still quite high, of course!)
The way I framed it was unclear, but the final number is correct because I was comparing the QALYs/$ of farmed animal interventions to that of malaria nets. See the footnote:
I was directly comparing the following rough estimates
40 chicken QALY/$ generated by broiler and cage-free interventions (Rethink Priorities has a mean of 41)
0.01 human QALY/$ generated by malaria nets from AMF based on GiveWell data (life expectancy ~60 years divided by $6000/life saved)
400 chicken QALY ~= 1 human QALY if we weight by neurons. Humans have about 86 billion neurons, the red junglefowl (ancestor of chickens) has 221 million, which is a ratio of 389.
40 / (0.01 * 400) gives you a multiplier of 10.
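Putting the footnote’s numbers into code; all inputs are the rough estimates listed above (the Rethink Priorities chicken-welfare figure, the GiveWell-based AMF numbers, and the neuron-count ratio):

```python
# All inputs are the post's rough estimates, not settled figures.
chicken_qaly_per_dollar = 40        # broiler/cage-free interventions
human_qaly_per_dollar = 60 / 6000   # AMF: ~60 life-years per ~$6,000
neuron_ratio = 86e9 / 221e6         # human vs. red junglefowl neurons, ~389

multiplier = chicken_qaly_per_dollar / (human_qaly_per_dollar * neuron_ratio)
print(round(multiplier))  # prints 10
```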
Those are some expensive socks! (or Ana has a lot of colleagues!)
I enjoyed this comment
Nice post!
One possibly obvious implication that I think is missing: when processes are multiplicative rather than additive, it is much more important to avoid zeros at some point in the process. There’s an analogy here with (e.g.) the O-ring theory of economic development.
I think I feel a bit confused with the concept of it being crucial to “stack a bunch of multipliers.”
Like how would you describe this story:
A student is planning a career working on US economic policy (because this is something he’s had an interest in for a little while). Then, he is exposed to some ideas in longtermism and decides instead to work on AI policy (because he thinks it seems like a big deal and is probably more important than economic policy).
This feels to me like it’s just “one step” or “one multiplier” that puts this student on a much higher EV career path.
Have longtermists abandoned the ITC framework? It’s all about importance, with no attention to tractability or crowdedness?
The ITC framework is correct. I meant to say that for longtermist interventions, importance tops out way higher than for neartermist interventions.
But it doesn’t follow that because importance is very high, you should “aim for” longtermist interventions. You still have to account for tractability and crowdedness.