This is not directly responding to your central point about reducing accessibility, but one comment is I think it could be unhelpful to set up the tension as longtermism vs. neartermism.
[Longtermist causes are] more confusing and harder to understand than neartermist causes. AI seems like ridiculous science fiction to most people.
I think this is true of AI (though even it has become way more widely accepted among our target audience), but it’s untrue of pandemic prevention, climate change, nuclear war, great power conflict and improving decision-making (all the other ones).
Climate change is the most popular cause among young people, so I’d say it’s actually a more intuitive starting point than global health.
Likewise, some people find neartermist causes like factory farming (and especially wild animal suffering) very unintuitive. (And it’s not obvious that neartermists shouldn’t also work on AI safety.)
I think it would be clearer to talk about highlighting intuitive vs. unintuitive causes in intro materials rather than neartermism vs. longtermism.
I agree there’s probably been a decline in accessibility due to a greater focus on AI (rather than longtermism itself, which could be presented in terms of intuitive causes).
A related issue is existential risk vs. longtermism. The idea that we want to prevent massive disasters is pretty intuitive to people, and The Precipice had a very positive reception in the press. Whereas I agree a more philosophical longtermist approach is more of a leap.
My second comment is I’d be keen to see more grappling with some of the reasons in favour of highlighting weirder causes more and earlier.
For instance, I agree it’s really important for EA to attract people who are very open minded and curious, to keep EA alive as a question. And one way to do that is to broadcast ideas that aren’t widely accepted.
I also think it’s really important for EA to be intellectually honest, and so if many (most?) of the leaders think AI alignment is the top issue, we should be upfront about that.
Similarly, if we think that some causes have ~100x the impact of others, there seem like big costs to not making that very obvious (to instead focus on how you can do more good within your existing cause).
I agree the ‘slow onramp’ strategy could easily turn out better, but it seems like there are strong arguments on both sides, and it would be useful to see more of an attempt to weigh them, ideally with some rough numbers.
I agree it’s really important for EA to attract people who are very open minded and curious, to keep EA alive as a question. And one way to do that is to broadcast ideas that aren’t widely accepted.
To an outsider who might be suspicious that EA or EA-adjacent spaces seem cult-y, or to an insider who might think EAs are deferring too much, how would EA as a movement do the above and successfully navigate between:
1) an outcome where the goals of maintaining/improving epistemic quality for the EA movement and keeping EA as a question are attained, and
2) an outcome where EA ends up self-selecting for those who are most likely to defer and embrace “ideas that aren’t widely accepted”, and doesn’t achieve those goals?
The assumption here is that being perceived as a cult or being a high-deferral community would be a bad outcome, though I guess not everyone would necessarily agree with this.
(Caveat: I very recently went down the Leverage rabbit hole, so this is at the front of my mind and I might be more sensitive to this than usual.)
if many (most?) of the leaders think AI alignment is the top issue, we should be upfront about that.
Agreed, though RE: “AI alignment is the top issue” I think it’s important to distinguish between whether they think:
1) AI misalignment is the most likely cause of human extinction or global suffering (+/- within [X timeframe]).
2) Donating to AI alignment is the most cost-effective place to donate for all worldviews.
3) Donating to AI alignment is the most cost-effective place to donate for [narrower range of worldviews].
4) Contributing to direct AI alignment work is the best career decision for all people.
5) Contributing to direct AI alignment work is the best career decision for [narrower range of people].
6) Prioritising AI alignment is the best way to maximise impact for EA as a movement (on the margin? at scale?).
Do you have a sense of where the consensus falls for those you consider EA leaders?
(Commenting in a personal capacity etc.)
This was such a great articulation of a core tension in effective altruism community building.
A key part of this tension comes from the fact that most ideas, even good ideas, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people and ideas that have potential to be iterated into something much stronger do not come into existence fully-formed.
Leaving more room for curious and open-minded people to put forward their butterfly ideas without being shamed/made to feel unintelligent means having room for bad ideas with poor justification. Not leaving room for unintelligent-sounding ideas with poor justification selects for people who are most willing to defer. And having room to delve into tangents off the beaten track of what has already been fleshed out carries the risk that the side-track turns out to be a dead end (most tangents will be), and no one wants to stick their neck out and explore an idea that is almost certainly going nowhere (but should be explored anyway, just in case).
Leaving room for ideas that don’t yet sound intelligent is hard to do while still keeping the conversation nuanced (but I think not doing it is even worse).
Also, I think conversations by the original authors of a lot of the more fleshed-out ideas are much more nuanced than the messages that get spread.
E.g. on 4): 80k has a long list of potential highest-priority cause areas worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn’t go directly into AI work (and should instead build aptitudes).
Nuanced ideas are harder to spread, but when people also feel they don’t have permission in community spaces (in local groups or on the forum) to say under-developed things, the off-the-beaten-track ideas that have been mentioned but not fleshed out are much less likely to come up in conversation (or to get developed further).