Effective Altruism and Free Riding

I’d like to thank Parker Whitfill, Andrew Kao, Stefan Schubert, and Phil Trammell for very helpful comments. Errors are my own.
Many people have argued that those involved in effective altruism should “be nice”, meaning that they should cooperate when facing prisoner’s-dilemma-type situations ([1] [2] [3]). While I find some of these arguments convincing, it seems to be underappreciated just how often someone attempting to do good will face prisoner’s dilemmas. Previous authors mostly highlight zero-sum conflict between opposing value systems [3] [4] or violations of common-sense social norms, such as lying [1]. However, the problem faced by a group of people trying to do good is effectively a public goods problem [10]; this means that, except in rare cases (such as when people agree completely on moral values), someone looking to do good will be playing a prisoner’s dilemma against others looking to do good.
In this post I first give some simple examples to illustrate how collective action problems almost surely arise within a group of people looking to do good. I then argue that the standard cause-prioritization methodology used within EA recommends defecting (“free-riding”) in these prisoner’s dilemma settings. Finally, I discuss some potential implications, including that popularizing EA thinking may cause harm and that there may be large gains from improving cooperation.
Main Points:
1. A group of people trying to do good is playing a form of public goods game. Except in rare circumstances, this leads to inefficiencies due to free-riding (defecting), and thus to gains from cooperation.
2. Free-riding comes from individuals putting resources toward causes which they personally view as neglected (i.e., undervalued by other people’s value systems) at the expense of causes for which there is more consensus.
3. Standard EA cause prioritization recommends that people free-ride on others’ efforts to do good (at least when interacting with people not in the EA community).
4. If existing societal norms favor cooperation among those trying to do good, EA may cause harm by encouraging people to free-ride.
5. There may be large gains from improving cooperation.
Collective Action Problems Among People Trying to Do Good
Note that the main argument in this section is not original to me. Others within EA have written about this, some in more general settings than what I look at here [10].
The standard collective action problem is in a setting where people are selfish (each individual cares about their own consumption) but there’s some public good, say clean air, that they all value. The main issue is that when deciding whether to pollute the air or not, an individual doesn’t consider the negative impacts that pollution will have on everyone else. This creates a prisoner’s dilemma, where they would all be better off if they didn’t pollute, but any individual is better off by polluting (defecting). These problems are often solved through governments or through informal norms of cooperation.
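As a refresher on that structure, here is a minimal sketch with made-up payoffs (the numbers are mine, chosen only to satisfy the prisoner’s-dilemma inequalities):

```python
# Two-player pollution game. Polluting earns the polluter 3 but imposes a
# cost of 2 on every player (including the polluter). Payoffs are illustrative.
def payoff(i_pollute: bool, other_pollutes: bool) -> int:
    return 3 * i_pollute - 2 * (i_pollute + other_pollutes)

for other in (False, True):
    best = max((False, True), key=lambda me: payoff(me, other))
    print(f"other pollutes: {other} -> my best choice: pollute={best}")

# Polluting is dominant, so both pollute and each gets payoff(True, True) = -1,
# even though mutual restraint would give each payoff(False, False) = 0.
```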
Here I argue that this collective action problem is almost surely present among a group of people trying to do good, even if every member of the group is completely unselfish. All that is needed is that people’s value systems place some weight on how good the world is (they are not simply warm-glow givers) and that they have some disagreement about what counts as good (there is some difference in values). The key intuition is that in a non-cooperative setting each altruist will donate to causes based on their own value system, without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities). Except in a few unlikely circumstances, an allocation can be found which is preferred by every value system (a Pareto improvement) over the non-cooperative equilibrium, just as with any other public goods game.
For most readers, I expect that the examples below will get the main point across. If anyone is especially interested, here is a more general model of altruistic coordination that I used to check the intuition.
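For concreteness, one minimal way to write the game down (my notation, not necessarily that of the linked model) is: funder $i$ splits a budget $w_i$ across causes and values total funding to each cause linearly,

$$U_i(g) = \sum_{c} v_{ic}\, G_c, \qquad G_c = \sum_{j} g_{jc}, \qquad \sum_{c} g_{ic} = w_i, \quad g_{ic} \geq 0,$$

where $v_{ic}$ is how much funder $i$ values a dollar going to cause $c$. In the non-cooperative game each funder maximizes $U_i$ over their own allocation $g_i$ only, so every dollar goes to their personal $\arg\max_c v_{ic}$; the value to everyone else, $\sum_{j \neq i} v_{jc}$, never enters the decision. That omitted term is the externality driving all of the examples below.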
Examples
A. Two funders, positive externalities
Take a situation with two funders: a total utilitarian and an environmentalist (taken to mean someone who intrinsically values environmental preservation). Each has a total of $1000 to donate. The total utilitarian thinks that climate change mitigation is a very important cause, but they would prefer that funding instead goes toward AI safety research, which they think is about 50% more important than climate change. The environmentalist also thinks climate change mitigation is important, but they would prefer to spend money on near-term conservation efforts, which they view as being 50% more important than climate change. The environmentalist places almost no value on AI safety research and the total utilitarian places almost no value on near-term conservation efforts. If they don’t cooperate, the unique Nash equilibrium has them both spending their money on their own preferred causes, so $1000 goes to AI safety, $1000 to conservation, and $0 to climate change. If they could cooperatively allocate donations, they would choose to give all of the money ($2000) to climate change, which gives each of them a payoff 33% higher than in the non-cooperative case.
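A few lines of arithmetic confirm these numbers; the valuations below are my encoding of the example, with payoffs assumed linear in dollars and “almost no value” rounded to zero:

```python
# Example A: each funder's payoff per dollar donated to each cause, by anyone.
values = {
    "utilitarian":      {"ai_safety": 1.5, "climate": 1.0, "conservation": 0.0},
    "environmentalist": {"ai_safety": 0.0, "climate": 1.0, "conservation": 1.5},
}

def payoff(funder: str, allocation: dict) -> float:
    """Payoff given total dollars donated to each cause (by both funders)."""
    return sum(values[funder][c] * dollars for c, dollars in allocation.items())

# Non-cooperative equilibrium: because payoffs are linear, donating to one's
# own top cause is a dominant strategy, so each funder's $1000 goes there.
nash = {"ai_safety": 1000, "climate": 0, "conservation": 1000}
coop = {"ai_safety": 0, "climate": 2000, "conservation": 0}  # all money pooled

for funder in values:
    p0, p1 = payoff(funder, nash), payoff(funder, coop)
    print(f"{funder}: {p0:.0f} -> {p1:.0f} ({100 * (p1 / p0 - 1):.0f}% gain)")
# Both lines print 1500 -> 2000 (33% gain), matching the text.
```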
B. Two funders, negative externalities
The gains from cooperation would be even larger if each funder placed negative value on the other funder’s preferred cause. For example, if one funder’s preferred cause were pro-choice advocacy and the other’s were pro-life advocacy, then their payoffs in the non-cooperative setting may be nearly zero (their donations cancel each other out), which means the cooperative outcome is better by an arbitrarily large percentage. This idea has been noted before in writings on moral trade [4].
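A stylized version of the cancellation, with numbers of my own invention:

```python
# Stylized advocacy war: the policy outcome moves with the difference in
# spending, valued at +1 per net dollar by one funder and -1 by the other.
def advocacy_payoffs(pro_choice_spend: float, pro_life_spend: float) -> dict:
    net = pro_choice_spend - pro_life_spend
    return {"pro_choice_funder": net, "pro_life_funder": -net}

print(advocacy_payoffs(1000, 1000))  # non-cooperative: both get 0

# Moral trade: both instead fund something each values at (say) 0.1 per dollar.
# The 0.1 is arbitrary; any positive value beats a payoff of zero, so the
# percentage gain over the non-cooperative outcome is unbounded.
print({funder: 0.1 * 2000 for funder in ("pro_choice_funder", "pro_life_funder")})
```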
Importantly, even if funders’ preferences for direct work lead to no negative externalities, there could be negative externalities in their preferences for advocacy. For example, in the situation in example A, neither funder places negative value on the other funder’s preferred cause. However, if we allow the utilitarian to fund advocacy which persuades people to donate to AI safety rather than climate change or conservation, this advocacy would be negatively valued by the environmentalist. Thus, even small differences in preferences for direct work can lead to zero-sum conflict on the advocacy front (for further discussion see [3] and [12]).
C. Multiple funders, positive externalities
Now notice that we could add a third funder to example A in a symmetric situation: say they value anti-aging research (which the other two funders hardly value at all) 50% more than climate change, but place no value on AI safety or conservation. In this case the gains from cooperating (putting all the money into climate change research) increase to 100% for each person. In general, adding funders who each have their own “weird” cause increases the gains from cooperating on causes for which there is more consensus.
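Extending the arithmetic from example A (same linear-payoff assumption) shows how the gain grows as symmetric funders are added:

```python
# n symmetric funders: each values their own pet cause at 1.5 per dollar and
# climate at 1.0, and places no value on anyone else's pet cause.
def cooperation_gain(n_funders: int, budget: float = 1000.0) -> float:
    nash_payoff = 1.5 * budget               # own money to own pet cause;
                                             # others' donations are worth 0
    coop_payoff = 1.0 * n_funders * budget   # everyone's money pooled in climate
    return 100 * (coop_payoff / nash_payoff - 1)

for n in (2, 3, 4, 5):
    print(f"{n} funders: {cooperation_gain(n):.0f}% gain")  # 33, 100, 167, 233
```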
D. No externalities
One case where cooperation does not lead to any gains is where people’s value systems are orthogonal to one another, so that there are no externalities. The most famous example of this is an economy of selfish individuals (everyone cares only about their own consumption and places no value, positive or negative, on the consumption of others). The non-cooperative equilibrium in this setting will be efficient, meaning that there can be no gains from cooperation (this is similar to the first welfare theorem). This could also occur (although I think it’s very unlikely) in a setting with altruistic individuals. In the setting from example A, if we change preferences so that both the environmentalist and the utilitarian place no value on climate change, then the non-cooperative equilibrium of the game cannot be improved upon. However, as noted above, the possibility of advocacy can create negative externalities between funders, and thus significant opportunities for cooperation. Also, I think in reality we see significant overlap in values, leading to large positive externalities from donations to certain causes.
E. Identical Value Systems
Another case in which the non-cooperative equilibrium is efficient is when there is no value disagreement among funders. Imagine two total utilitarians in the setting from example A. They would both choose to fund AI safety research in the non-cooperative setting, which is also the cooperative choice.
However, notice that this conclusion depends on the assumption that people are perfectly moral. If they are instead partially selfish, while still agreeing on what is morally right, then in the non-cooperative setting they will overinvest in their own personal consumption. This leads to gains from cooperating by spending more on the public good (AI safety), as in the classical collective action problem.
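For the mechanics of that classical problem, here is a minimal sketch; the log-utility functional form and the 50/50 selfishness weight are my illustrative assumptions, not anything from the sources above:

```python
import math

# Two identical, partially selfish utilitarians. Each has budget w, donates g
# to the shared cause (say AI safety), and consumes the rest:
#   U = a * log(consumption) + (1 - a) * log(total donations)
w, a = 1000.0, 0.5  # a is the selfishness weight

def utility(own_gift: float, other_gift: float) -> float:
    return a * math.log(w - own_gift) + (1 - a) * math.log(own_gift + other_gift)

def best_response(other_gift: float) -> float:
    grid = [0.5 + k * (w - 1) / 2000 for k in range(2000)]  # coarse search
    return max(grid, key=lambda g: utility(g, other_gift))

g = w / 2
for _ in range(60):            # iterate best responses to find the Nash gift
    g = best_response(g)

print(f"Nash gift per person:      {g:6.1f}")            # ~ (1-a)w/(1+a) = 333.3
print(f"Efficient gift per person: {(1 - a) * w:6.1f}")  # (1-a)w = 500.0
# Non-cooperative giving falls short, so both would gain by jointly giving more.
```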
Perhaps the current EA community is close to having identical moral value systems (and is mostly unselfish), to the point where the gains from cooperation are low. I expect that this isn’t true. There seems to be a lot of heterogeneity in value systems within EA, and even small value differences can lead to a lot of inefficiency through the advocacy channel mentioned above [12]. Also, even if people’s moral values were identical, there seems to be a lot of disagreement within EA about difficult-to-answer empirical questions (such as whether we are living in the most important century [13]). These disagreements, as long as they persist, also lead to collective action problems.
EA Cause-Prioritization and Free-Riding
Having established that people attempting to do the most good are typically playing a prisoner’s dilemma, I now want to look at what EA organizations (mainly 80,000 Hours) have suggested people do. Here I would like to distinguish between cooperation with people involved in EA and cooperation with people outside of it. Within EA, it seems commonly accepted that people should cooperate with those who have different values [2]. People often speak of maximizing “our” impact rather than “my” impact. And, importantly, people seem to disapprove of choices which benefit your own value system at the expense of others’ values.
With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns. This is exactly the logic of free-riding which led to coordination failures in the above examples: every individual makes decisions irrespective of the benefits or harms to other value systems, which leads to underinvestment in causes which many people value positively and overinvestment in causes which many value negatively.
The cause areas in example A were chosen because I think climate change is one area where EA is probably free-riding off other people’s efforts to do good. Given its wide range of negative consequences (harm to GDP, the global poor, animals, and the environment, as well as increased extinction risk), a variety of moral systems place positive weight on mitigating climate change. Perhaps for this reason, governments and other groups are putting a large amount of resources toward the problem. This large amount of resources, along with the assumption of diminishing returns, has led many EAs to put no resources toward climate change (because it is not neglected) and instead focus on other cause areas. In effect, this is a decision to free-ride on the climate change mitigation work being done by those with different value systems. I expect this is also the case for many other causes which EAs regard as “important but not neglected”.
What Should We Do About This?
Although I believe that the EA community frequently defects in prisoner’s dilemmas, I am much less certain about whether this is a bad thing. If everyone else is defecting, and it’s very costly to improve cooperation, then the best that we can do is to defect ourselves. However, if there currently is some cooperation going on, following EA advice could reduce that cooperation, and thus be sub-optimal. Furthermore, even if there isn’t much cooperation currently, working to improve cooperation could be more valuable than simply not cooperating, depending on how costly it is to do so.
Working to Not Destroy Cooperation
There are a few reasons why I think it’s possible that there’s currently some cooperation between people with different value systems. The first is that a large literature in behavioral economics finds that people frequently cooperate when playing prisoner’s dilemmas, at least when they expect their opponent to cooperate as well [6]. There is also a fair amount of research showing that studying economics causes people to defect more often in prisoner’s dilemmas [14]. Hopefully learning about effective altruism doesn’t lead to a similar behavior change among moral actors. However, it should be noted that in behavioral research the outcomes are typically monetary payoffs to participants. I’m not aware of any research showing that people tend to cooperate when the stakes of the game are moral objectives (like in the examples above). For all I know, people don’t cooperate much in such situations, in which case it would not be possible for EA to cause more defection.
Next, some criticisms of effective altruism seem to be in line with the concern that it will reduce cooperation among those who wish to do good. Daron Acemoglu’s 2015 criticism of effective altruism is one example [7] (note that Acemoglu is one of the most influential economists in the world). Although much of his critique focuses on earning to give, I think its substance applies more generally. He claims that effective altruism often advocates for doing good in ways that have negative externalities for others (like earning to give through high-frequency trading), and thus that it could be harmful if viewing earning to give as an ethical life became the norm. He thinks many existing norms are more beneficial, such as the view that civil service or community activism are ethical activities.
More generally, there is a lot of criticism of private philanthropy for being “undemocratic” [8]. Free-riding issues among those looking to do good are one basis for this criticism. The government is the main institution we have for cooperating to solve collective action problems, which includes collective action problems between those looking to do good. Although any individual could do more good by donating their time and money to private philanthropy (defecting), we all may be better off if we all worked through the government or through some other cooperative channel. The large amount of criticism of private philanthropy may be evidence that cooperative norms around doing good are somewhat common in society.
If the above stories are true, and there actually is a degree of cooperative behavior happening, then spreading the methodology currently used within EA could be harmful, as it could lead to a decrease in cooperation. One may think we can still use this methodology without advocating that others do it, which may avoid any negative consequences. This is basically the idea of defecting in secret. As Brian Tomasik discusses [1], this seems unlikely to succeed; if EA has any major successes, then even without any advocacy other people are likely to notice and to imitate our methodology.
Another implication is that further investments in EA cause prioritization could be harmful. One of the main differences between the cause prioritization work done by EA organizations and the work more commonly done in economics is that EA cause prioritization takes the perspective of a benevolent individual rather than a government. Perhaps, as EA cause prioritization continues to improve, more people will choose to follow its advice and act unilaterally rather than cooperatively.
I should also note that even if the above stories are true, the other benefits of EA (mainly, encouraging people to do good effectively) may outweigh any negative effects from reducing cooperation.
Working to Improve Cooperation
Even if there isn’t much cooperation currently happening, there could be large gains to working to build such cooperation. For example, if cooperative norms aren’t widespread, then we could work to build those norms. If the government is currently very dysfunctional and non-cooperative, then we can work to improve it. A number of EA initiatives already involve increasing cooperation, including:
1. Work on improving institutional decision-making [9] and international cooperation
2. Work on mechanism design for altruistic coordination [10]
3. CLR’s research initiative on cooperation [11]
The arguments given here only strengthen the case for working on those causes. There are also a number of academic literatures that could be valuable, including those on the private provision of public goods and group conflict.
There are some other important considerations here. One is that methods for building cooperation within a relatively like-minded group of people may not work for building cooperation among more diverse groups. For example, increasing the warm glow from fighting for a common cause may help solve collective action problems within a political party, but it may make it more difficult to get party members to support compromise with an opposing party (because compromise prevents them from getting warm glow from fighting).
Also, there may be reasons to prioritize building mechanisms for cooperation within effective altruism before expanding to a more value-diverse group of people. Suppose that people whose value systems differ significantly from the average EA’s tend to be inefficient in their efforts to do good. If they are introduced to EA, they will be able to achieve their goals more effectively, which may actually have negative externalities for those currently involved in EA (through the advocacy channels mentioned above, for example). Thus, it may be better to first develop good mechanisms for cooperation, so that once these other people are introduced to EA ideas it will be rational for them to cooperate as well.
Finally, and more speculatively, I expect that many ways to improve cooperation involve increasing returns to scale, at least in a narrow sense. For example, improving institutions at the national or international level may only succeed if a very large number of people participate, which may be very difficult to achieve if the current norm is that altruists don’t cooperate much (you have to convince everyone to coordinate on another equilibrium). More appealing would be to pursue methods of cooperating which provide benefits even if smaller numbers of people participate. This could include reforming local governments, one at a time, then taking the reforms to state and national governments. Or it could include building a mechanism for cooperating within effective altruism and then adding more people into that mechanism incrementally.
Conclusion
There is no general reason to believe that good outcomes will arise when every individual aims to do the most good with respect to their own value system. In fact, in standard settings (like a group of people independently choosing where to donate money), the outcome when individuals aim to maximize their own impact will almost surely be inefficient. This means that there can be large gains to cooperation between altruistic individuals. It also means that the effective altruism movement, which encourages individuals to maximize their impact, could have negative consequences.
References
[1] https://longtermrisk.org/reasons-to-be-nice-to-other-value-systems/
[2] https://80000hours.org/articles/coordination/
[3] https://rationalaltruist.com/2013/06/13/against-moral-advocacy/
[4] https://www.fhi.ox.ac.uk/wp-content/uploads/moral-trade-1.pdf
[5] https://80000hours.org/articles/problem-framework/
[6] https://www.sciencedirect.com/science/article/pii/S1574071406010086
[7] http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism
[8] https://www.vox.com/future-perfect/2019/5/27/18635923/philanthropy-change-the-world-charity-phil-buchanan
[9] https://80000hours.org/problem-profiles/improving-institutional-decision-making/
[10] https://drive.google.com/file/d/1_Tob-zKBVBrnuQ0kWEBFFuuo_4A6WIRj/view
[11] https://longtermrisk.org/topic/cooperation/
[12] https://www.philiptrammell.com/blog/43
[13] https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
[14] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5584942/