Part Two: Opinions and diversity are broad and harder to manage
At great effort, and to great value, I think the leadership of the EAA or FAW community has worked hard to be a big tent and accommodate a huge number of different viewpoints and people.
The range of viewpoints accommodated is broader than in any other EA cause area. At the same time, these viewpoints seem to be accommodated without getting in anyone’s hair or stopping work.
Just to be clear, by “big tent” I mean accommodating schools or factions that have been opposed to FAW in the past.
To calibrate on what is being achieved:
Imagine if anti-aid activists were accommodated and given a platform in global health, and anti-futurists and anti-technologists in longtermism.
Imagine that, in addition to AGI/ASI concerns and takeoff along the lines of MIRI or ARC, the EA AI safety community also accommodated prosaic AI-safety and regulatory viewpoints and had to listen to those too.
If this sounds hard: yes, it is.
But there’s more. Another viewpoint or worldview currently accommodated is what LessWrong or the EA Forum calls “social justice”. Related rhetoric and forms of activism that probably wouldn’t be acceptable to many EAs exist in that community.
This adds to the challenge of a new forum. Once again, one could write a giant essay here, but the truth is that the patterns and norms of discourse on LessWrong and the EA Forum produce conformity and filtering.
So, partially because of the differences in norms in a new forum (described in Part One above), these viewpoints could clash in a new animal welfare forum.
This requires management. It’s hard to communicate concisely what this management looks like, and this comment is long enough.
To succinctly motivate this, imagine one scenario:
The new forum is seen as a performative space for the canonization of acceptable approaches and attitudes. Management is sort of passive and effectively takes the path of least resistance.
The result is that certain factions willing to use aggressive and sophisticated tactics and rhetoric try to occupy the space to advance their agenda. This activism doesn’t go unnoticed by others, and fighting breaks out.
Eventually, leadership takes action to moderate the conflict, but harm is done, and it’s hard to bring the forum back to a more open style of discourse. It would have been much better to have stronger, active moderation and leadership at the beginning, even though this was opposed (and largely illegible and not understood) by people used to the EA Forum or LessWrong.