I thought this post raised many points worth pondering, but I am skeptical of the actual suggestions, largely because the post underrates the benefits of the current setup and neglects the costs of its suggestions. I’ll list my thoughts below:
a) Yeah, the community aspect is worrying in terms of how it distorts people’s incentives, but I believe that we also have to be willing to back ourselves and not risk crippling our effectiveness by optimising too much on minimising downside in the case where we are wrong.
b) Ways of thinking and frameworks for understanding the world are much more than merely a “vibe” or “intellectual aesthetic”.
c) Groupthink can be addressed by other, less costly interventions, such as the Criticism and Red Teaming Contest. I imagine that we could run other projects in this space, such as providing longer-term funding for people who bring a different perspective to the table. These aren’t perfect, but achieving goals is much easier when you have many like-minded people, so swinging too far the other way could cripple us too.
d) I don’t see the EA style as being hard to acquire. However, I agree that it’s important for us to be able to appreciate criticisms written in other styles, as otherwise we’ll learn from others at a much slower rate.
e) I feel that EA is somewhat dropping the ball on special responsibilities at the moment. With our current resources and community size, I think that we could address this without substantially impacting our mission, although this might change over the longer term.
f) I feel that the advantages of a community are vastly underrated by this post. For one, the community provides a vital talent pool. Many people would never have gotten involved in direct work if there hadn’t been a lower-commitment step than redirecting their career that they could take first, or a local community running events to help them understand why the cause was important. I suppose we could structure events and activities to minimise the extent to which people become friends, but that would just be a bad community.
g) That said, we should dedicate more effort towards recruiting people with the specific skills that we need. We need more programs like the Legal Priorities Summer Institute or the EA Communicators Fellowship. I’m also bullish on cause-specific movement building to attract people to direct work who care about the specific cause, but who might not vibe with EA.
h) Giving What We Can and GiveWell are spreading the meme ‘when donating keep in mind that some charities are more effective than others’ without positioning it in a broader EA framework. I’m really happy to see this as I think that many people who would never want to be part of the EA community might be persuaded to adopt this meme.
i) “We should not try to get people who care a lot about animal welfare or decreasing global inequality to care about x-risk”—I agree that we want to limit the amount of effort/resources spent trying to poach people from other cause areas, but shifting from one cause area to another could potentially lead to orders of magnitude of difference in the impact someone has.
j) EA was pretty skeptical about being too meta-focused at first, due to worries that we might lose our focus on direct work, but in retrospect I suspect that we were wrong not to have spent more money on community-building, as it seems to have paid dividends in terms of recruiting talent.
k) I’d like to see the section on less internal recruiting engage with the argument that value alignment is actually important. I think the inevitable result of hiring people from society at large would be a watering down of EA ideas. Rather than a marginally less impactful project being pursued, I expect that in many cases this could reduce impact by an order of magnitude, with people pursuing the highest-status project rather than the highest-impact one. There’s also a significant risk that once you bring in people who aren’t value-aligned, they bring in more people who aren’t value-aligned, and then the whole culture changes. I’m guessing you might not think this is important given that you’ve described it as a “vibe”, but I’d suggest that having the right culture is a key part of achieving high performance.
l) “So for example, AI risk would probably be cut off around now”—While this would free up resources to incubate more causes, it could also be a major blunder if AI risk is an urgent, short-term priority that we need to be moving on ASAP.
Most of what I’ve written here is criticism, so I wanted to emphasise again that I found the ideas here fascinating and I definitely think reading your post was worth my time :-).
Thanks!
a) I broadly like the idea that “we also have to be willing to back ourselves and not risk crippling our effectiveness by optimising too much on minimising downside in the case where we are wrong”. I would note, though, that downgrading the self-directed investment reduces the need for caution, and so reduces the crippling effect.
j) I think it’s hard to decide how much meta-investment is optimal. You talk about it as if it’s a matter of dialling one parameter (money) up or down, though, which I think is not the right way to think about it. The ‘direction’ in which you invest in meta-things also matters a lot. In my ideal world, “Doing Good Better” becomes part of the collective meme-space just as “The Scientific Method” has. However, it’s not obvious which type of investment would lead to such normalisation and adoption.
h) I’m happy to hear Giving What We Can and GiveWell don’t position themselves in the wider EA framework. I’m not very up to date with how (effectively) they are spreading memes.
c) Running an intervention such as the Criticism and Red Teaming Contest is only effective if people can fundamentally change their minds based on submissions. (And don’t just enjoy how open-minded we’re all being by inviting criticism, or only change their minds about non-core topics.)
f) I agree talent is important. However, I think organising as a community may well have made us lose out on talent too. (The idea of “a local community running events to help them understand why the cause was important” actually gives me some pyramid-scheme vibes, btw.)
i) I wasn’t talking about poaching. My point was more that caring about all EA cause areas should not in any way be a condition or desired outcome of someone caring about one EA cause area.
Re “shifting from one cause area to another could potentially lead to orders of magnitude of difference in the impact someone has”: sure, but I think the cost of switching within EA has also been high. What people think is the most impactful area changes all the time, and skilling up in a new area takes time. If someone works in a useful area, has built up expertise there, and the whole area is to their comparative advantage, then it would be best if they stayed in that area.
k) Here we disagree. I think that within a project there should be value alignment. However, the people within a project do not, imo, have to be value-aligned with EA at large.
Re “I’d suggest that having the right culture is a key part of achieving high performance”: I personally think “doing the thing” and engaging with concrete projects are most important.
I also actually feel like the tendency to pursue “the highest-status project rather than the highest-impact one” is currently partially caused by the EA community being so important to people. If someone’s primary community is something like a chess club, a pub, or their family, then there are probably loads of ways to gain status in that group that have nothing to do with the content of their job (e.g. getting better at chess, being funny, or being reliable and kind). However, if the status that matters most to you is whether other EAs think your work is impactful, then you end up with people wanting to work on the hottest topic rather than doing the most impactful thing based on their comparative advantage.