Factional infighting
[epistemic status: low, probably some elements are wrong]
tl;dr
- communities have a range of dispute resolution mechanisms, ranging from voting to public conflict to some kind of civil war
- some of these are much better than others
- EA has disputes and resources, and it seems likely that there will be a high-profile conflict at some point
- what mechanisms could we put in place to handle that conflict constructively and in a positive-sum way?
When a community grows as powerful as EA now is, disagreements about resource allocation follow. In EA these are likely to be significant.
There are EAs who think the most effective cause area is AI safety. There are EAs who think it’s global development. These people do not agree, though there can be ways for them to coordinate.
The spat between GiveWell and GiveDirectly is just the beginning. Once disagreements reach the scale of tens of millions of dollars, some of them are going to be sorted out over Twitter. People may badmouth each other and damage the reputation of EA as a whole.
The way around this is to make solving problems easier than creating them. As in a political coalition, people need to gain more from being inside the movement than from being outside it.
The EA forum already does good work here, allowing everyone to upvote posts they like.
Here are some other power-sharing mechanisms:
- a fund where people can vote on cause areas, expected value estimates, or moral weights, so that its allocations move with the community’s values as a whole (a rough sketch of how such a fund might aggregate votes follows this list)
- a focus on “we disagree, but we respect”, highlighting how different parts of the community disagree while respecting the efforts of others
- a clear mechanism for bargains, where animal welfare EAs donate to longtermist charities in exchange for longtermists going vegan, and vice versa
- some videos of key figures from different parts of the community discussing their disagreements in a kind and human way
- “I would change if”: a series of posts from people saying what would make them work on different cause areas. How cheap would chicken welfare have to be before Yudkowsky moved to work on it? How cheap would AI safety have to be before it became Singer’s key talking point?
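To make the fund idea above a little more concrete, here is a minimal, hypothetical sketch of one way such a fund might aggregate member votes into allocations: each member submits weights over cause areas, each ballot is normalised so every member has equal say, and the budget is split in proportion to the aggregated weights. The cause names, figures, and the `allocate_fund` function are illustrative assumptions, not any actual EA Funds mechanism.

```python
# Hypothetical sketch: proportional allocation from member ballots.
# All names and numbers below are illustrative.

from collections import defaultdict


def allocate_fund(ballots: list[dict[str, float]], budget: float) -> dict[str, float]:
    """Aggregate per-member weights and split the budget proportionally."""
    totals: dict[str, float] = defaultdict(float)
    for ballot in ballots:
        ballot_sum = sum(ballot.values())
        if ballot_sum <= 0:
            continue  # skip empty or invalid ballots
        for cause, weight in ballot.items():
            # Normalise each ballot so every member has equal say.
            totals[cause] += weight / ballot_sum
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {cause: budget * share / grand_total for cause, share in totals.items()}


# Illustrative usage: three members with different cause priorities.
ballots = [
    {"AI safety": 80, "Global development": 20},
    {"Global development": 60, "Animal welfare": 40},
    {"Animal welfare": 100},
]
print(allocate_fund(ballots, budget=1_000_000))
```

One could swap in quadratic voting or a moral-weights step instead of simple proportionality; the point of the sketch is only that aggregating the community’s stated priorities into a concrete split is mechanically straightforward.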
Call me a pessimist, but I can’t see how a community managing $50bn across deeply divided priorities will stay chummy without proper dispute resolution systems. I suggest we start building them now.
By and large I think this aspect is going surprisingly well, largely because people have adopted a “disagree but respect” ethos.
I’m a bit unsure of such a fund—I guess that would pit different cause areas against each other more directly, which could be a conflict framing.
Regarding the mechanism of bargains, it’s a bit unclear to me what problem that solves.