My sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many that fit the first. I think this was true for mechanistic reasons related to how humans work, rather than being an accident of history, and I think that recent technological advances may even have exaggerated the effects.
I think that works for many groups, and many subfields/related causes, but not for “effective altruism”.
To unpack this a bit: I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement defined as a "commitment to using reason and evidence to do the most good we can".
However, when we sell ourselves on a "commitment to using reason and evidence to do the most good we can" but instead present people with a very narrow set of conclusions, I think we do neither of these things well: we put people off and we undermine our value.
I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.
People are hearing about EA. These people could become allies or members of the community and/or of our causes. However, if we present ourselves too narrowly we might not just lose them; they might become adversaries.
I've seen this already: people have soured on EA because it seemed too narrow and too overconfident, they have become increasingly adversarial, and that has hurt our overall goal of improving the world.
I think that works for many groups, and many subfields/related causes, but not for “effective altruism”.
To unpack this a bit: I think that "AI safety" or "animal welfare" movements could quite possibly get much bigger much more quickly than an "effective altruism" movement defined as a "commitment to using reason and evidence to do the most good we can".
I agree! That’s why I’m surprised by the initial claim in the article, which seems to be saying that we’re more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us. It seems like the two should be flipped?
Sorry if the remainder of the comment didn’t communicate this clearly enough:
I think the "bait and switch" of EA (selling "EA is a question" but seeming to deliver "EA is these specific conclusions") is self-limiting for our total impact, because:
It limits the size of our community (it puts off people who see it as a bait and switch)
It limits the quality of our community (groupthink, echo chambers, overfishing small ponds, etc.)
We lose allies
We create enemies
Roughly speaking: impact = size (community + allies) × quality (community + allies) − the actions of enemies actively working against us.
If we decrease the size and quality of our community and allies while increasing the number and ferocity of the people working against us, then we limit our impact.
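To make the heuristic concrete, here is a toy sketch of the calculation. The function and every number in it are invented purely for illustration and carry no empirical weight:

```python
# Toy illustration of the rough impact heuristic above.
# All numbers are made up for the sake of the example.

def net_impact(size, quality, opposition):
    """Impact ~ size x quality, minus the drag from active opposition."""
    return size * quality - opposition

# Broad, welcoming framing: larger pool of community members and allies,
# higher-quality engagement, fewer people actively working against us.
broad = net_impact(size=100, quality=8, opposition=50)   # 750

# Narrow "bait and switch" framing: smaller, lower-quality pool, more adversaries.
narrow = net_impact(size=60, quality=6, opposition=150)  # 210

print(broad, narrow)
```

The point of the sketch is only that the narrow framing loses on all three terms at once, so the gap in net impact compounds rather than adds.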
Does that help clarify?