I think that works for many groups, and many subfields/related causes, but not for “effective altruism”.
To unpack this a bit, I think that “AI safety” or “animal welfare” movements could quite possibly get much bigger, much more quickly, than an “effective altruism” movement defined as a “commitment to using reason and evidence to do the most good we can”.
I agree! That’s why I’m surprised by the initial claim in the article, which seems to be saying that we’re more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us. It seems like the two should be flipped?
Sorry if the remainder of the comment didn’t communicate this clearly enough:
I think the “bait and switch” of EA (selling “EA is a question” but seeming to deliver “EA is these specific conclusions”) is self-limiting for our total impact, because:
It limits the size of our community (it puts off people who see it as a bait and switch)
It limits the quality of our community (groupthink, echo chambers, overfishing small ponds, etc.)
We lose allies
We create enemies
Impact is roughly: size (community + allies) × quality (community + allies), minus the actions of enemies actively working against us.
If we decrease the size and quality of our community and allies while increasing the size and ferocity of those working against us, then we limit our impact.
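To put that heuristic a bit more formally (S, Q and E here are just my rough placeholder labels, not precise quantities):

$$\text{Impact} \;\approx\; S \times Q \;-\; E$$

where S is the size of the community plus allies, Q is their quality, and E is the effort of those actively working against us.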
Does that help clarify?