I think this message should be emphasized much more in many EA and LT contexts, e.g. introductory materials on effectivealtruism.org and 80000hours.org.
As your paper points out: longtermist axiology probably changes the ranking between x-risk and catastrophic risk interventions in some cases. But there’s lots of convergence, and in practice your ranked list of interventions won’t change much (even if the diff between them does… after you adjust for cluelessness, Pascal’s mugging, etc).
Some worry that if you’re a fan of longtermist axiology then this approach to comms is disingenuous. I strongly disagree: it’s normal to start your comms by finding common ground, and to elaborate on your full reasoning later on.
Andrew Leigh MP seems to agree. Here’s the blurb from his recent book, “What’s The Worst That Could Happen?”:
Did you know that you’re more likely to die from a catastrophe than in a car crash? The odds that a typical US resident will die from a catastrophic event—for example, nuclear war, bioterrorism, or out-of-control artificial intelligence—have been estimated at 1 in 6. That’s fifteen times more likely than a fatal car crash and thirty-one times more likely than being murdered. In What’s the Worst That Could Happen?, Andrew Leigh looks at catastrophic risks and how to mitigate them, arguing provocatively that the rise of populist politics makes catastrophe more likely.
The message I take is that there’s potentially a big difference between these two questions:
1. Which government policies should one advocate for?
2. For an impartial individual, what are the best causes and interventions to work on?
Most of effective altruism, including 80,000 Hours, has focused on the second question.
This paper makes a good case for an answer to the first, but doesn’t tell us much about the second.
If you only value the lives of the present generation, it’s not at all obvious that marginal investment in reducing catastrophic risk beats funding GiveWell-recommended charities (or, if animal lives are included, fighting factory farming). And this paper doesn’t make that case.
I think the mistake people have made is not to distinguish more clearly between these two questions, both in discussion of what’s best and in choice of strategy.
People often criticise effective altruism because they interpret the suggestions aimed at individuals as policy proposals (“but if everyone did this...”). But if community members are not clearly distinguishing the two perspectives, to some degree that’s fair – you can see from this paper why you would not want to turn over control of the government to strong longtermists. If the community is going to expand from philanthropy to policy, it needs to rethink what proposals it advocates for and how they are justified.
Much past longtermist advocacy arguably fits into the framework of trying to get people to increase their altruistic willingness to pay to help future generations, and I think it makes sense on those grounds. Though, again, it could perhaps be clearer that what governments should do, all things considered, given people’s current willingness to pay is a different question.
Thank you (again) for this.
A third question is:
3. What values should one morally advocate for?
The answers to this could be different yet again.