This was a great articulation of a core tension in effective altruism community building.
A key part of this tension comes from the fact that most ideas, even good ones, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people, and ideas that have the potential to be iterated into something much stronger, do not come into existence fully formed.
Leaving more room for curious, open-minded people to put forward their butterfly ideas without being shamed or made to feel unintelligent means making room for bad ideas with poor justification. Not leaving room for unintelligent-sounding ideas with poor justification selects for the people most willing to defer. And making room to delve into tangents off the beaten track of what has already been fleshed out carries the danger that the side-track is a dead end (as most tangents will be), and no one wants to stick their neck out to explore an idea that is almost certainly going nowhere (but should be explored anyway, just in case).
Leaving room for ideas that don’t yet sound intelligent is hard to do while still keeping the conversation nuanced (but I think not doing it is even worse).
Also, I think the original authors of many of the more fleshed-out ideas are much more nuanced in conversation than the messages that end up being spread.
E.g. on 4: 80k has a long list of potential highest-priority cause areas worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and should instead build aptitudes).
Nuanced ideas are harder to spread. But also, when people feel they don't have permission in community spaces (in local groups or on the forum) to say under-developed things, the off-the-beaten-track ideas that have been mentioned but not fleshed out are much less likely to come up in conversation (or to get developed further).