Effectiveness is a Conjunction of Multipliers

Epistemic status: Not new material, but hopefully points more directly at key EA intuitions.

Ana is a hypothetical junior software engineer in Silicon Valley making $150k/year. Every year, she spends 10% of her income to anonymously buy socks for her colleagues. Most people would agree that Ana is being altruistic, but not particularly efficient about it. If utility is logarithmic in income, Ana can 40x her impact by instead giving the socks to a local homeless person with an income of $5,000. But in the EA community, we’ve noticed further multipliers:

  1. 40x: giving socks to local homeless people instead of her colleagues

  2. 10x more: giving socks to the poorest people in the world (income $500) instead of homeless people

  3. 2x more: giving cash (GiveDirectly) instead of socks

  4. 8x more: giving malaria nets rather than cash

  5. 10x more: farmed animal welfare rather than human welfare[1]

  6. 4x more: working in a more lucrative industry like quant research, working longer hours, and doing salary negotiation to raise her salary to $600k[2]

  7. 8x more: donating 80% instead of 10%

  8. 10x more: taking on risk to shoot for charity entrepreneurship or billionairedom, producing $6M of expected value yearly[3]

Total multiplier: about 20,480,000x[4]
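As a sanity check, the factors above compound exactly as stated; a quick sketch (treating each listed factor as exact, which footnote [4] notes they are not):

```python
# Multipliers (1) through (8) from the list above.
multipliers = [40, 10, 2, 8, 10, 4, 8, 10]

total = 1
for m in multipliers:
    total *= m

print(f"{total:,}")  # 20,480,000

# Dropping any one factor divides the total by that factor; e.g. without
# the final 10x, Ana's impact is 2,048,000x -- 10% of her maximum.
print(f"{total // 10:,}")  # 2,048,000
```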

I think that many people new to EA have heard that multipliers like these exist, but don’t really internalize that all of these multipliers stack multiplicatively. If Ana hits all of these bonuses, she will have a direct impact 20,480,000 times larger than giving socks to random colleagues. If she misses one of these multipliers, say the last one, Ana will still have a direct impact 2,048,000 times larger than with the initial socks plan. This sounds good until you realize that Ana is losing out on 90% of her potential impact, consigning literally millions of chickens to an existence worse than death. To get more than 50% of her maximum possible impact, Ana must hit every single multiplier. This is one way that reality is unforgiving.

Multipliers result from judgment, ambition, and risk

  • Good judgment: responsible for multipliers (1) through (4), making the impact 6,400 times larger, and implicit in (8) too, because going through with a bad for-profit or charity startup idea could have zero or even negative value.

  • Ambition: responsible for multipliers (6) through (8), making her expected impact 320x larger.

  • Willingness to take on risk is mostly relevant in (8), though you could think of (5) as having risk from moral uncertainty.
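Under this grouping, the judgment and ambition factors compound separately; a small check (factor values taken from the numbered list above):

```python
# Factors keyed by their position in the numbered list above.
multipliers = {1: 40, 2: 10, 3: 2, 4: 8, 5: 10, 6: 4, 7: 8, 8: 10}

def compound(indices):
    """Product of the multipliers at the given list positions."""
    total = 1
    for i in indices:
        total *= multipliers[i]
    return total

print(compound([1, 2, 3, 4]))  # judgment, (1)-(4): 6400
print(compound([6, 7, 8]))     # ambition, (6)-(8): 320
```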

This example is neartermist to make the numbers more concrete, but the same principles apply within longtermism. For a longtermist, good judgment and ambition are even more critical. It’s difficult to tell the difference between a project that reduces existential risk by 0.02%, a project that reduces x-risk by 0.002%, and a worthless project, so you need excellent judgment to get within 50% of your maximum impact. Ambition is in some sense what longtermism is all about—longtermist causes have a huge multiplier resulting from astronomically larger scale and (longtermists argue) only somewhat worse tractability. And taking on risk allows hits-based giving, whether in neartermism or longtermism.

More generally, actions, especially complicated actions and research directions, live in an extremely high-dimensional space. If actions are vectors and the goodness of an action is its cosine similarity to the best action, and your action is 90% as good as the optimum (25° off the best path) in each of 50 orthogonal directions, the amount of good you do is capped at 0.9^50 ≈ 0.005x the maximum.
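A minimal check of this compounding claim (cos 25° ≈ 0.906, i.e. roughly 90% per direction):

```python
import math

# One direction: 25 degrees off the best path gives cosine similarity ~0.906.
per_direction = math.cos(math.radians(25))
print(round(per_direction, 3))  # 0.906

# Compounded across 50 orthogonal directions at 90% each:
print(round(0.9 ** 50, 4))  # 0.0052
```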


  • It’s very difficult to take an arbitrary project that you’re excited about for other reasons, and tweak it to “make it EA”[5]. An arbitrary project will have zero or one of these multipliers, and making it hit seven or eight more multipliers will often make it unrecognizable.

  • People who are not totally dedicated to maximizing impact will make some concession to other selfish or altruistic goals, like having a child, working in whichever of (academia, industry, other) is most comfortable, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their “EA part” should try much harder to make a less costly concession instead, or find a way to still hit the multiplier.

  • It’s more important to have good judgment than to dedicate 100% of your life to an EA project. If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours. But if bad judgment causes you to miss one or two multipliers, you could make less than 10% of your maximum impact. (But note that working really hard can sometimes enable multipliers—see this comment by Mathieu Putz.)

  • Aiming for the minimum of self-care is dangerous.

  • Information is extremely valuable when it determines if you can apply a multiplier. For example, Ana should probably spend a year deciding whether she’s a good fit for charity entrepreneurship, or thinking about whether her moral circle includes chickens, but not spend a year choosing between two careers that have similar impact. Networking is a special case of information.

  • Finding multipliers is hard, so most people in the EA community (likely including me) are missing at least one multiplier, and consequently in some sense doing less than 50% of the good they could be.

  1. ^

    Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana’s moral circle includes all beings weighted by neuron count, but she hasn’t thought about this enough.

  2. ^

    As of 2022, typical pay for great quant researchers with a couple of years of experience, or great developers with a few years of experience.

  3. ^

    Ana is in theory ambitious and skilled enough to start a charity or tech startup, but she hasn’t heard of Charity Entrepreneurship yet.

  4. ^

    Could be off by 10x in either direction, but doesn’t affect my core point.

  5. ^

    “make it EA” = “make it one of the highest-impact things you could be doing”, not “make the EA community approve of it”