We can either become a movement of people who seem dedicated to a particular set of conclusions about the world, or we can become a movement of people united by a shared commitment to using reason and evidence to do the most good we can.
The former is a much smaller group, which makes it easier to coordinate our focus, but it’s also a group that’s more easily dismissed. People might see us as a bunch of nerds[1] who have read too many philosophy papers[2] and who are out of touch with the real world.
The latter is a much bigger group.
I’m aware that this is not exactly the central thrust of the piece, but I’d be interested if you could expand on why we might expect the former to be a smaller group than the latter.
I agree that a “commitment to using reason and evidence to do the most good we can” is a much better target to aim for than “dedicated to a particular set of conclusions about the world”. However, my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first. I think this was true for mechanistic reasons related to how humans work, rather than as an accident of history, and I think that recent technological advances may even have amplified these effects.
+1 to this.
In fact, I think that it’s harder to get a very big (or very fast-growing) set of people to do the “reason and evidence” thing well. I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.
I am very keen for EA to be about the “reason and evidence” thing, rather than about specific answers. But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.
I agree with this. I think it’s even harder to build a community that reasons well together when we come across as dogmatic (and we risk cultivating an echo chamber).
Note: I do want to applaud a lot of the recent work that the CEA core team is doing to avoid this; the updates to effectivealtruism.org, for example, have helped!
A few things here:
Firstly, 30%/year is pretty damn fast by most standards (see the quick compounding sketch at the end of this comment)!
Secondly, I agree that being thoughtful is essential (that’s a key part of my central claim!).
Thirdly, some of the rate of growth is within “our” control (e.g. CEA can control how much it invests in certain community building activities). However, a lot of it isn’t. People are noticing as we ramp up activities labelled EA, or even only loosely associated with EA.
For example, to avoid growing faster than 30%/year, should someone ask Will and the team promoting WWOTF to pull back on the promotion? What about asking SBF not to support more candidates or scale up the FTX Future Fund? Should we not promote EA to new donors/GWWC members? Should GiveWell stop scaling up?
If anything associated with EA grows, it’ll trickle through to more people discovering it.
I think we need to expect that it’s not entirely within our control and to act thoughtfully in light of this.
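As a rough illustration of why 30%/year is fast (a minimal sketch; the growth rates and time horizons below are assumptions picked for illustration, not estimates of EA’s actual growth):

```python
# What sustained annual growth implies over time.
# The rates and horizons are illustrative assumptions, not estimates of EA's actual growth.

def multiple_after(years: int, annual_growth: float) -> float:
    """Size multiple after `years` of compounding at `annual_growth` (0.30 = 30%/year)."""
    return (1 + annual_growth) ** years

for rate in (0.10, 0.30, 0.50):
    print(f"{rate:.0%}/year -> {multiple_after(5, rate):.1f}x in 5 years, "
          f"{multiple_after(10, rate):.1f}x in 10 years")

# 10%/year -> 1.6x in 5 years, 2.6x in 10 years
# 30%/year -> 3.7x in 5 years, 13.8x in 10 years
# 50%/year -> 7.6x in 5 years, 57.7x in 10 years
```

Even “cautious” growth at 30%/year compounds to a community roughly 14 times its current size within a decade.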
Agree that echo chamber/dogmatism is also a major barrier to epistemics!
“30% seems high by normal standards”—yep, I guess so. But I’m excited about things like GWWC trying to grow much faster than 30%, and I think that’s possible.
Agree it’s not fully within our control, and that we might not yet be hitting 30%. I think that if we’re hitting >35% annual growth, I would begin to favour cutting back on certain sorts of outreach efforts or doing things like increasing the bar for EAG. I wouldn’t want GW/GWWC to slow down, but I would want you to begin to point fewer people to EA (at least temporarily, so that we can manage the growth). [Off the cuff take, maybe I’d change my mind on further reflection.]
Are there estimates about current or previous growth rates?
There are some, e.g. here.
I think that works for many groups, and many subfields/related causes, but not for “effective altruism”.
To unpack this a bit, I think that “AI safety” or “animal welfare” movements could quite possibly get much bigger much more quickly than an “effective altruism” movement built around a “commitment to using reason and evidence to do the most good we can”.
However, when we sell a “commitment to using reason and evidence to do the most good we can” but instead present people with a very narrow set of conclusions, I think we do neither of these things well. Instead, we put people off and we undermine our value.
I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.
People are hearing about EA. These people could become allies or members of our community and/or our causes. However, if we present ourselves too narrowly we might not just lose them; they might become adversaries.
I’ve seen this already: people have soured on EA because it seemed too narrow and too overconfident, they have become increasingly adversarial, and that hurts our overall goal of improving the world.
I agree! That’s why I’m surprised by the initial claim in the article, which seems to be saying that we’re more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us. It seems like the two should be flipped?
Sorry if the remainder of the comment didn’t communicate this clearly enough:
I think the “bait and switch” of EA (selling “EA is a question” but seeming to deliver “EA is these specific conclusions”) is self-limiting for our total impact, because:
It limits the size of our community (it puts off people who see it as a bait and switch)
It limits the quality of the community (groupthink, echo chambers, overfishing small ponds, etc.)
We lose allies
We create enemies
Roughly: impact = size (community + allies) * quality (community + allies) - the actions of enemies actively working against us.
If we decrease the size and quality of our community and allies while increasing the number and ferocity of the people working against us, then we limit our impact.
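To make that rough heuristic concrete, here is a minimal toy calculation (the impact function and every number in it are invented purely for illustration, not estimates):

```python
# Toy version of the heuristic above:
#   impact ~ size(community + allies) * quality(community + allies) - drag from adversaries
# Every number below is invented purely for illustration.

def impact(size: float, quality: float, adversary_drag: float) -> float:
    """Crude impact score: headcount times average quality, minus adversarial drag."""
    return size * quality - adversary_drag

# Narrow, overconfident pitch: faster headcount growth, lower quality, more people put off.
narrow = impact(size=1_500, quality=0.6, adversary_drag=400)

# "Reason and evidence" pitch: slower growth, higher quality, fewer adversaries.
broad = impact(size=1_000, quality=1.0, adversary_drag=50)

print(narrow, broad)  # 500.0 950.0 -- the smaller-but-healthier community does more good
```

In this toy setup the smaller-but-healthier community comes out ahead, which is the trade-off the list above is pointing at.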
Does that help clarify?
A core part of the differing intuitions might be that we’re thinking about two different timescales.
It seems intuitively right to me that the “dedicated to a particular set of conclusions about the world” version of effective altruism will grow faster in the short term. I think this might be because conclusions require less nuanced communication, and, being more concrete, they suggest clearer actions that can get people on board faster.
I also have the intuition that a “commitment to using reason and evidence to do the most good we can” (I’d maybe add, “with some proportion of our resources”) has the potential to attract a larger backing in the long term.
I have done a terrible “paint” job (literally used paint) in purple on one of the diagrams in this post to illustrate what I mean:
There are movement-building strategies that land us on the grey line, which gives us faster growth in the short term (so a bigger tent for a while) but doesn’t change our saturation point (we’re still at saturation point 1).
I think that a “broad spectrum of ideas” might mean our eventual saturation point is higher, even if this requires slower growth in the near term. I’ve illustrated this as the purple line, which ends up bigger in the end, at saturation point 2, even though growth is slower in the short term. In this sense, we will be a smaller tent for a while, but we have the potential to end up as a bigger tent in some terminal equilibrium.
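Since the purple-painted diagram isn’t reproduced here, here is a minimal sketch of the two curves being described, using simple logistic (S-shaped) growth with made-up parameters:

```python
import math

def logistic(t: float, ceiling: float, rate: float, midpoint: float) -> float:
    """Simple S-curve: community size at year t, levelling off at `ceiling` (the saturation point)."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 5):
    # "Grey line": faster early growth, lower saturation point 1.
    grey = logistic(t, ceiling=100, rate=0.6, midpoint=5)
    # "Purple line": slower early growth, higher saturation point 2.
    purple = logistic(t, ceiling=300, rate=0.3, midpoint=15)
    print(f"year {t:2d}: grey ~ {grey:3.0f}, purple ~ {purple:3.0f}")
```

With these illustrative parameters the grey curve is ahead for roughly the first decade but flattens at saturation point 1, while the purple curve overtakes it later and settles at the higher saturation point 2.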
An example of a “movement” with a vaguer, bigger-picture idea that got so big it became too commonplace to count as a movement might be “the scientific method”?
I think “large groups that reason together about how to achieve some shared values” are something so common that we ignore them. Examples include democratic countries, cities, and communities.
Not that this means reasoning about being effective can attract as large a group. But one can hope.