Interesting, thanks for writing this up!
In practice, and for the EA community in particular, I think there are some reasons why the collective action problem isn’t quite as bad as it may seem. For instance, with diminishing marginal returns on causes, the most efficient allocation will be a portfolio of interventions with weights roughly proportional to how much people care on average. But something quite similar can also happen in the non-cooperative equilibrium, given a sufficient diversity of actors who each support the cause they’re most excited about. (Maybe this is similar to case D in your analysis.)
Can you point to examples of concrete EA causes that you think receive too many or too few resources due to these collective action problems?
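The claim above can be illustrated with a toy simulation. This is a minimal sketch under assumed log (diminishing-returns) utilities and randomly drawn donor weights, not a model from the post: it compares the cooperative portfolio (funding proportional to average caring) with a non-cooperative heuristic where each donor gives everything to their single favorite cause.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n donors each give 1 unit across k causes.
# Social utility of cause j with total funding x_j is avg_care_j * log(x_j).
n_donors, n_causes = 1000, 4
weights = rng.dirichlet(np.ones(n_causes), size=n_donors)  # each donor's cause weights
avg_care = weights.mean(axis=0)

# Cooperative optimum: maximizing sum_j avg_care_j * log(x_j) subject to
# sum_j x_j = n gives x_j proportional to avg_care_j (standard log-utility result).
coop = n_donors * avg_care / avg_care.sum()

# Non-cooperative heuristic: each donor funds only the cause they weight highest.
favorites = weights.argmax(axis=1)
noncoop = np.bincount(favorites, minlength=n_causes).astype(float)

def social_utility(x):
    return float(np.sum(avg_care * np.log(x)))

print("cooperative portfolio:   ", np.round(coop))
print("non-cooperative portfolio:", np.round(noncoop))
print("utility gap:", social_utility(coop) - social_utility(noncoop))
```

With a diverse population of donors, the two portfolios end up close and the utility gap is small, which is the sense in which the non-cooperative equilibrium can approximate the efficient allocation.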
Thanks for the comment. First, I’d point out that there’s a good chance the collective action problem within EA isn’t so bad: as I mentioned in the post, there has been a fairly large emphasis on cooperating with others within EA. It’s when interacting with people outside of EA that I think we act non-cooperatively.
However, it’s still worth discussing whether there are major unsolved collective action problems within EA. I’ll give some possible examples here, though I’m very unsure about many of them. First, here are some causes which I think benefit EAs of many different value systems and would thus be underfunded if people were acting non-cooperatively:
1. General infrastructure, including the EA Forum, EA Funds, and EA Global. This would also include the mechanisms for cooperation which I mentioned in the post. All of these are like public goods in that they probably benefit nearly every value system within EA. If true, this also means that the “EA meta fund” may be the most public-good-like of the four EA Funds.
2. The development of informal norms within the community (like being nice, not overstating claims or making misleading arguments, and cooperating with others). The development and maintenance of these norms also seems to be a public good which benefits all value systems.
3. (This is the most speculative one.) More long-term-oriented approaches to near-term EA cause areas. An example is approaches to global development which involve building better and lasting political institutions (see this forum post). This may represent a kind of compromise between some long-termist EAs (who may normally donate to AI safety) and global development EAs (who would normally donate to short-term development initiatives like AMF).
And here are some causes which I think are viewed as harmful by some value systems and thus would be overfunded if people acted non-cooperatively:
1. Advocacy efforts to convince people to convert from other EA cause areas to your own. As I mentioned in the post, these can be valued negatively by other value systems.
2. Causes which increase (or decrease) the population. People disagree on whether creating more lives is on average good or bad: for example, some suffering-focused EAs may think that creating more human lives is on average bad, while, conversely, some people may think that creating more farm animal lives is on average good. This means that causes which increase (or decrease) the population will be viewed as harmful by those who view population increases (or decreases) as bad. Brian Tomasik’s example at the end of this post is along those lines.
So, in general, I don’t think I agree that the EA community would avoid major collective action problems by default. It seems more likely that EA has solved most of its internal collective action problems by emphasizing cooperation.
One more example to add here of a cause which may be like a “public good” within the EA community: promoting international cooperation. Many important causes are global public goods (that is, causes which benefit the whole world and thus any one nation has an incentive to free-ride on other nations’ contributions), including global poverty, climate change, x-risk reduction, and animal welfare. I know that FHI already has some research on building international cooperation. I would guess that some EAs who primarily give to global poverty would be willing to shift funding towards building international cooperation if some EAs who normally give to AI safety do the same.
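The free-riding logic behind global public goods can be made concrete with a standard textbook model (this is my illustrative sketch, not something from the post): with m identical nations each valuing total provision G at B·log(G) and paying only for their own contribution, equilibrium provision falls short of the optimum by a factor of m.

```python
# Hypothetical model: m identical nations value a global public good
# (e.g., x-risk reduction) at B * log(G), where G is total provision,
# and each nation bears the cost of its own contribution.
B, m = 10.0, 5  # assumed benefit scale and number of nations

# Nash equilibrium: each nation equates its *own* marginal benefit B/G
# to marginal cost 1, so total provision is G = B, regardless of m.
G_nash = B

# Social optimum: *total* marginal benefit m*B/G equals 1, so G = m*B.
G_opt = m * B

print(G_nash, G_opt)  # equilibrium provision is 1/m of the optimum
```

The gap grows with the number of actors, which is why coordination mechanisms (like the international-cooperation research mentioned above, or the matching pledge between global poverty and AI safety donors) can matter so much here.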