I also wanted to attempt to clarify 80k’s position a little.
With prisoner’s dilemmas against people outside of EA, it seems that the standard advice is to defect. In 80,000 Hours’ cause prioritization framework, the goal is to estimate the marginal benefit (measured by your value system, presumably) of an extra unit of resources being invested in a cause area [5]. No mention is given to how others value a cause, except to say that cause areas which you value a lot relative to others are likely to have the highest returns.
I agree this is the thrust of the article. However, also note that in the introduction we say:
However, if you’re coordinating with others in aiming to have an impact, then you also need to consider how their actions will change in response to what you do, which adds additional elements to the framework, which we cover here.
Within the section on scale we say:
It can also be useful to group instrumental sources of value within scale, such as gaining information about which issues are most important, or building a movement around a set of issues. Ideally, one would also capture the spillover benefits of progress on this problem on other problems. Coordination considerations, as briefly covered later, can also change how to assess scale.
And then at the end, we have this section:
https://80000hours.org/articles/problem-framework/#how-to-factor-in-coordination
On the key ideas page, we also have a short section on coordination and link to:
https://80000hours.org/articles/coordination/
which advocates compromising with other value systems.
And there’s the section where we advocate not causing harm:
https://80000hours.org/key-ideas/#moral-uncertainty-and-moderation
Unfortunately, we haven’t yet done a great job of tying all these considerations together – coordination gets wedged in as an ‘advanced’ consideration, whereas maybe you need to start from a cooperative perspective and totally reframe everything in those terms.
I’m still really unsure of all of these issues. How common are prisoner’s dilemma style situations for altruists? When we try to factor in greater cooperation, how will that change the practical rules of thumb? And how might that change how we explain EA? I’m very curious for more input and thinking on these questions.
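To make the first question more concrete, here’s a minimal toy model of the kind of prisoner’s dilemma I have in mind between two altruists with different value systems. All of the numbers are invented purely for illustration:

    # Toy prisoner's dilemma between two movements with different value
    # systems. All numbers are made up purely for illustration.
    #
    # Each side can use an aggressive tactic ("D", defect) that gains its
    # own cause a little while costing the other side's cause more, as
    # judged by that side's values, or refrain ("C", cooperate).

    GAIN_TO_SELF = 2   # what I gain, by my own lights, when I defect
    HARM_TO_OTHER = 6  # what the other side loses, by its lights, when I defect

    def payoff(my_choice: str, their_choice: str) -> int:
        """My payoff, measured in my own value system."""
        score = 0
        if my_choice == "D":     # my aggressive tactic helps my cause a bit
            score += GAIN_TO_SELF
        if their_choice == "D":  # their aggressive tactic harms my cause more
            score -= HARM_TO_OTHER
        return score

    for mine in "CD":
        for theirs in "CD":
            print(f"me={mine}, them={theirs}: my payoff = {payoff(mine, theirs)}")

This prints the classic dilemma structure: defecting strictly dominates (2 > 0 and -4 > -6), yet mutual defection leaves both sides at -4 when mutual cooperation would have left both at 0, each by their own values. That’s the basic case for compromising with other value systems – though I don’t know how often real situations for altruists actually have this payoff structure, which is part of what I’m unsure about.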
Thanks for the clarification. I apologize for making it sound as if 80k specifically endorsed not cooperating.