Very nice post. It does seem like two of your points are potentially at odds:
>People who are not totally dedicated to EA will make some concession to other selfish or altruistic goals, like having a child, working in academia, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their “EA part” should try much harder to avoid this concession, or find a way to still hit the multiplier.
vs.
>Aiming for the minimum of self-care is dangerous.
It seems the “concessions” could fall under the category of self-care.
Agree—and I would consider adjusting the first of those passages (the one starting with “people who are not totally dedicated to EA”) for such reasons.
All of these concessions except working in academia seem pretty unlikely to result in missing a multiplier, unless they somehow lead to working on the wrong project. Otherwise they look like efficiency losses, not multiplier losses. In particular, having a child and being tied to a particular location seem especially unlikely to cost you a multiplier, at least if you maintain enough savings to still be able to take risks. Pursuing fuzzies is more complicated because it depends on how much of your time/money you spend on it, but you could e.g. allocate 10% of your altruism budget to fuzzies and it would only be a 10% loss.
Some ways that these concessions can lose you >50% of your impact:
Having a child makes simultaneously founding a startup really hard (edit: and can anchor your family to a specific location)
Working in academia can force you to spend >50% of your effort on researching unimportant problems as a grad student, playing politics, writing grants, and the like. Academia also has benefits, but your research won’t always benefit from them, so in the worst case this eats >50% of your impact
If you prioritize AI safety, and think most good AI safety research happens at places like Redwood, MIRI, Anthropic, CHAI, etc., living in the CA Bay Area can be 2x better than living anywhere else
If you prioritize US policy, living in DC can be >2x better than living anywhere else
Allocating 10% of your altruism budget to fuzzies is a good plan, and I’m mostly worried about people trying to get fuzzies in ways that are much more costly for impact. For instance, EA student groups being optimized for being a “thriving community” rather than having a good theory of change, or someone earning-to-give so that they can donate for fuzzies rather than doing direct work that’s much more impactful.
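As a toy illustration of the arithmetic in this thread (hypothetical numbers, and assuming impact factors compound multiplicatively, which is an assumption rather than anything the thread commits to), compare a 10% fuzzies budget with missing two 2x multipliers:

```python
def impact(factors):
    """Total impact as the product of independent impact factors (toy model)."""
    result = 1.0
    for f in factors:
        result *= f
    return result

baseline = impact([2.0, 2.0, 1.0])   # hits both 2x multipliers, spends nothing on fuzzies
fuzzies  = impact([2.0, 2.0, 0.9])   # allocates 10% of the altruism budget to fuzzies
missed   = impact([1.0, 1.0, 1.0])   # misses both 2x multipliers

print(f"fuzzies budget keeps {fuzzies / baseline:.0%} of impact")    # 90%
print(f"missed multipliers keep {missed / baseline:.0%} of impact")  # 25%
```

The cheap concession scales linearly with the budget you give it; missed multipliers compound, which is why they dominate the calculation.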
I know lots of people who are incredibly impactful and are parents and/or work in academia. For many, career choices such as academia are a good route to impact. For many, having children is a core part of leading a good life for them and (to take a very narrow lens) is instrumentally important to their productivity.
So I find those claims false, and find it very odd to describe those choices as “concession[s] to other selfish or altruistic goals”. We shouldn’t be implying that “maximising your impact (and, by implication, being a good EA) is hard to make compatible with having a kid”; that’s a good way to become a tiny, weird, and shrinking niche group. I found that bullet point particularly jarring and off-putting (and I imagine many others would too), especially as I work in academia and am considering having a child. This was a shame, as much of the rest of the post was very useful and interesting.
Thanks for this comment; I made minor edits to that point clarifying that academia can be good or bad.
First off, I think we should separate concerns of truth from those of offputtingness, and be clear about which is which. With that said, I think “concession to other selfish or altruistic goals” is true to the best of my knowledge. Here’s a version of it that I think about, which is still true but probably less offputting, and could have been substituted for that bullet point if I were more careful and less concise:
When your goal is to maximize impact, but parts of you want things other than maximizing impact, you must either remove those parts or make some concession to satisfy them. Usually stamping out a part of yourself is impossible or dangerous, so making some concession is better. Some of these concessions are cheap (from an impact perspective), like donating 2% of your time to a cause you have a personal connection to rather than the most impactful one. Some are expensive in that they remove multipliers and lose >50% of your impact, like changing your career from researching AI safety to working at Netflix because your software engineer friends think AI safety is weird. Which concessions are cheap vs. expensive depends on your situation; living in a particular location can be free if you’re a remote researcher for Rethink Priorities but expensive if by far the best career opportunity for you is to work in a particular biosecurity lab. I want to caution people against making an unnecessarily expensive concession, or making a cheap concession much more expensive than necessary. Sometimes this means taking resources away from your non-EA goals, but it does not mean totally ignoring them.
Regarding having a child, I’m not an expert or a parent, but my impression is that it’s rare for having kids to actually create more impact than never having the desire for them in the first place would. I vaguely remember Julia Wise having children due to some combination of (a) non-EA goals, and (b) the fact that not having kids would make her sad, potentially reducing her productivity. In this case, the impact-maximizer would say that (a) is fine/unavoidable (not everyone is totally dedicated to impact), and that (b) means the sadness from not having kids would be a more costly concession than having them, so having kids is the least costly concession available. Maybe for some people, having kids makes life meaningful and gives them something to fight for in the world, which would increase their impact. But I haven’t met any such people.
It’s possible to have non-impact goals that actually increase your impact. Some examples are being truth-seeking, increasing your status in the EA community, or not wanting to let down your EA friends/colleagues. But I have two concerns with putting too much emphasis on this. First, optimizing too hard for such a goal raises Goodhart concerns: there are selfish rationalists, EAs who add to an echo chamber, and people who stay on projects that aren’t maximally impactful. Second, the idea that we can directly optimize for impact is a core EA intuition, and focusing on noncentral cases of other goals increasing impact might distract from it. I think it’s better to recognize that most of us are not pure impact-maximizers, that we must make concessions to other goals, and that which concessions we make is extremely important to our impact.
>I know lots of people who are incredibly impactful and are parents and/or work in academia
This doesn’t seem like much evidence one way or the other unless you can directly observe or infer the counterfactual.
If you take the OP at face value, you’re traversing at least 6–7 OOMs within choices that can be made by the same individual, so it seems very plausible that someone can be observed to be extremely impactful on an absolute scale while still operating at only 10% of their personal best, or less. (There is also variance in impact across people for hard-to-control reasons, such as intelligence or nationality.)
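To make that arithmetic concrete (the 6–7 OOM range is taken from the OP; every number below is a hypothetical), here is a quick sketch of how someone can sit several orders of magnitude above a typical contributor while still being 10x below their own ceiling:

```python
import math

# Toy numbers: the impact of career choices available to one person spans ~6 OOMs.
personal_best = 1e6   # this person's best available option (arbitrary units)
typical       = 1e0   # a typical option at the low end of the range

actual = 0.10 * personal_best   # operating at only 10% of personal best

print(f"vs. a typical option: {actual / typical:,.0f}x")                    # 100,000x
print(f"OOMs below own ceiling: {math.log10(personal_best / actual):.0f}")  # 1
```

So observing that someone is “incredibly impactful” in absolute terms is fully compatible with a large counterfactual loss relative to their best option.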
If you prioritize US policy, being a permanent resident of a state and living in DC temporarily makes sense. But living permanently in DC forecloses an entire path through which you could have impact, i.e. getting elected to federal office. Maybe that’s the right choice if you are a much, much better fit for appointed jobs than elected ones, or if you have a particularly high-impact appointed job where you know you can accomplish more than you could in Congress. But on net I would expect being a permanent resident of DC to reduce most people’s policy impact (as does being unwilling to move to DC when called upon to do so).