I wonder if EA as it currently exists can be reframed into more cooperative terms, which could make it safer to promote. I’m speculating here, but I’d be interested in thoughts.
One approach to cause prioritisation is to ask “what would be the ideal allocation of effort by the whole world?” (taking account of everyone’s values & all the possible gains from trade), and then to focus on whichever opportunities are most underinvested in relative to that ideal, and where you have the greatest comparative advantage relative to other actors. I’ve heard EA researchers say they sometimes think in these terms already. I think something like this is where a ‘cooperation first’ approach to cause selection would lead you.
My guess is that there’s a good chance this approach would lead EA to support similar areas to those we support currently. For instance, existential risks are often pitched as a global public goods problem: I think that, on balance, people would prefer more effort to go into mitigation (since most people prefer not to die, and have some concern for future generations). But our existing institutions are not delivering this, so EAs might aim to fill the gap, so long as we think we have a comparative advantage in addressing these issues (and only until institutions improve to the point where this is no longer needed).
I expect we could also see work on global poverty in these terms. On balance, people would prefer global poverty to disappear (especially if we consider the interests of the poor themselves), but the division into nation states makes it hard for the world to achieve that.
This becomes even more likely if we think that the values of future generations & animals should also be considered when we construct the ‘world portfolio’ of effort. If these values were taken into account, the world would, for instance, spend heavily on existential risk reduction & other investments that benefit the future; currently, it doesn’t. It seems a bit like the present generation is failing to cooperate with future generations. EA’s cause priorities aim to redress this failure.
In short, the current priorities seem cooperative to me, but the justification is often framed in marginal terms, and maybe that style of justification subtly encourages an uncooperative mindset.
I agree with your intuition about what a “cooperative” cause prioritization might look like, although I do think a lot more work would need to be done to formalize this. I also think it may not always make sense to use cooperative cause prioritization: if everyone else always acts non-cooperatively, you should too.
I’m actually pretty skeptical of the idea that EA tends to fund causes which are widely valued by people as a whole. It could be true, but it seems like it would be a very convenient coincidence. EA seems to be made up of people with pretty unusual value systems (this, I’d expect, is partly what leads EAs to view some causes as orders of magnitude more important than the causes that other people choose to fund). It would be surprising if optimizing independently for the average EA value system led to the same funding choices as optimizing for some combination of the value systems in the general population. While I agree that global poverty work seems to be pretty broadly valued (many governments and international organizations are devoted to it), I’m unsure about things like x-risk reduction. Have you seen any evidence that that is broadly popular? Does the UN have an initiative on x-risk?
I would imagine that work which improves institutions is one cause area which would look significantly more important in the cooperative framework. As I mention in the post, governments are one of the main ways that groups of people solve collective action problems, so improving their functioning would probably benefit most value systems. This would involve improving both formal institutions (e.g. constitutions) and informal institutions (e.g. civic social norms). In the cooperative equilibrium, we could all be made better off because people of all different value systems would put a significant amount of resources towards building and maintaining strong institutions.
A (tentative) response to your second to last paragraph: the preferences of animals and future generations would probably not be directly considered when constructing the cooperative world portfolio. Gains from cooperation come from people who have control over resources working together so that they’re better off than in the case where they independently spend their resources. Animals do not control any resources, so there are no gains from cooperating with them. Just like in the non-cooperative case, the preferences of animals will only be reflected indirectly due to people who care about animals (just to be clear: I do think that we should care about animals and future people). I expect this is mostly true of future generations as well, but maybe there is some room for inter-temporal cooperation.
Interesting. My personal view is that the neglect of future generations is likely ‘where the action is’ in cause prioritisation, so if you exclude their interests from the cooperative portfolio, then I’m less interested in the project.
I’d still agree that we should factor in cooperation, but my intuition is then that it’s going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I’d be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?
The point about putting more emphasis on international coordination and improving institutions seems reasonable, though again, I’d wonder if it’s enough to trump the lower neglectedness.
Either way, it seems a bit odd to describe longtermist EAs who are trying to help future generations as ‘uncooperative’. It’s more like they’re trying to ‘cooperate’ with future people, even if direct trade isn’t possible.
On the point about whether the present generation values x-risk reduction, one way to illustrate it is that the value of a statistical life in the US is about $5m. This means that US citizens alone would be willing to pay, I think, around $1.5 trillion to avoid 0.1 percentage points of existential risk.
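To spell out the arithmetic behind that figure (a rough back-of-the-envelope sketch; the US population of roughly 300 million is my assumption, while the $5m VSL and the 0.1 percentage point reduction are as above):

\[
\underbrace{\$5\text{m}}_{\text{VSL}} \times \underbrace{0.001}_{\text{0.1ppt risk reduction}} \times \underbrace{3 \times 10^{8}}_{\text{US population}} \approx \$1.5\ \text{trillion}
\]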
Will MacAskill used this as an argument that the returns on x-risk reduction must be lower than they seem (e.g. perhaps the risks are actually much lower), which may be right, but it still illustrates the point that present people place significant value on existential risk reduction.
“I’d still agree that we should factor in cooperation, but my intuition is then that it’s going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I’d be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?”
I think one point worth emphasizing is that if the cooperative portfolio is a Pareto improvement, then theoretically no altruist, including longtermist EAs, can be made worse off by switching to it.
Therefore, even if future generations are heavily neglected, the cooperative portfolio is better according to longtermist EAs (and thus for future generations) than the competitive equilibrium. It may still be too costly to move towards the cooperative portfolio, and it is non-obvious to me how the neglect of future generations changes the cost of trying to move society towards it, or the gain from defecting. But if the cost of moving society to the cooperative portfolio is very low, then we should probably cooperate even if future generations are very neglected.
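To state the Pareto point slightly more formally (the notation here is mine, not from the post): write $x^{N}$ for the non-cooperative allocation, $x^{C}$ for the cooperative portfolio, and $U_i$ for altruist $i$’s valuation of an allocation. Saying that $x^{C}$ is a Pareto improvement just means

\[
U_i(x^{C}) \ge U_i(x^{N}) \ \text{for every altruist } i, \qquad U_j(x^{C}) > U_j(x^{N}) \ \text{for at least one } j,
\]

so in particular a longtermist’s own valuation of the cooperative portfolio is at least as high as their valuation of the status quo, however neglected future generations are under either allocation.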
“I’d be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?”
The more general model captured the idea that there are almost always gains from cooperation between those looking to do good. It doesn’t show, however, that those gains are necessarily large relative to the costs of building cooperation (including opportunity costs). I’m not sure what the answer is to that.
Here’s one line of reasoning which makes me think the net gains from cooperation may be large. Setting aside the possibility that everyone has near-identical valuations of causes, I think we’re left with two likely scenarios:
1. There’s enough overlap in valuations of direct work to create significant gains from compromise on direct work (maybe on the order of doubling each person’s impact). This is like example A in the post.
2. Valuations of direct work are so far apart (everyone thinks their cause area is 100x more valuable than the others) that we’re nearly in the situation from example D, and there will be relatively small gains from building cooperation on direct work. However, this creates opportunities for huge externalities from advocacy, which means the actual setting is closer to example B. Intuition: if you think x-risk mitigation is orders of magnitude more important than global poverty, then an intervention which persuades someone to switch from working on global poverty to x-risk will also have massive gains (and a massively negative impact from the perspective of someone who strongly prefers global poverty); a toy numerical version of this is sketched below. I don’t think this is a minor concern. It seems like a lot of resources get wasted in politics due to people with nearly orthogonal value systems fighting each other through persuasion and other means.
So, in either case, it seems like the gains from cooperation are large.
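To make the advocacy externality in scenario 2 concrete, here’s a toy calculation with made-up numbers (my own illustrative figures, not the examples from the post). Suppose advocate A values x-risk work at 100 units per dollar and global poverty work at 1 unit per dollar, while donor B holds the reverse valuations, and A persuades B to redirect $1,000 from poverty to x-risk:

\[
\Delta V_A = 1{,}000 \times 100 - 1{,}000 \times 1 = +99{,}000 \ \text{units}, \qquad
\Delta V_B = 1{,}000 \times 1 - 1{,}000 \times 100 = -99{,}000 \ \text{units}.
\]

By A’s lights, the persuasion is worth it at any cost below roughly $990 of money A would otherwise have spent on x-risk, and by B’s lights it destroys an equally large amount of value, so both sides have strong private incentives to spend on persuasion efforts that largely cancel out.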
“I’d still agree that we should factor in cooperation, but my intuition is then that it’s going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation.”
For now, I don’t think any major changes in decisions should be made based on this. We don’t know enough about how difficult it would be to build cooperation, or what the gains from cooperation would be. I guess the only concrete recommendation may be to more strongly emphasize the “not being a jerk” part of effective altruism (especially because that can often be in major conflict with the “maximize impact” part). Also, I would argue that there’s a chance that cooperation could be very important, so it’s worth researching more.