I think humanity is intellectually on a trajectory towards greater concern for non-human animals. But that trajectory isn't something to rely on: trajectories can reverse or stall, and most of the world is likely to remain, at best, indifferent to and complicit in the growing suffering of farmed animals for decades to come. We could easily “lock in” our (fairly horrific) modern norms.
But I think we should probably still lean towards preventing human extinction.
The main reason for this is the pursuit of convergent goals.
It’s just way harder to integrate pro-extinction actions into the other things that we care about and are trying to do as a movement.
We care about making people and animals healthier and happier, avoiding mass suffering events like pandemics and global conflict, improving global institutions, and pursuing moral progress. There are many actions that advance these goals (reducing pandemic risk, making AI safer, supporting global development, preventing great-power conflict) and also tend to reduce extinction risk. But there are very few things we can do that advance these goals while increasing x-risk.
Even if extinction itself would be positive in expectation, trying to make humans go extinct is all-or-nothing, and you will probably never face a choice where x-risk is the only variable at play. Most things that increase human x-risk at the margin also increase the chance of other bad outcomes: anything that raises the odds of an extinction-level pandemic, say, raises the odds of a catastrophic-but-survivable one even more. This means there are very few actions you could take with a view towards increasing x-risk that are positive in expectation.
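To make the structure of that argument concrete, here's a toy expected-value sketch. All the numbers and value assignments are invented purely for illustration, not claims about the real magnitudes:

```python
# Toy illustration with made-up numbers: even granting extinction
# positive value, a marginal pro-extinction action can still be
# negative in expectation once its side effects are priced in.

# Hypothetical outcome values (illustrative only):
V_EXTINCTION = +10     # suppose, for argument's sake, extinction ends farmed-animal suffering
V_CATASTROPHE = -100   # a survivable global catastrophe: mass suffering, none of the "upside"
V_STATUS_QUO = 0       # baseline

# A hypothetical marginal action shifts probabilities like so:
d_p_extinction = 0.01   # +1 percentage point of extinction risk
d_p_catastrophe = 0.05  # but +5 points of survivable-catastrophe risk

# Change in expected value from taking the action:
delta_ev = d_p_extinction * V_EXTINCTION + d_p_catastrophe * V_CATASTROPHE
print(delta_ev)  # 0.01*10 + 0.05*(-100) = -4.9: negative in expectation
```

The specific numbers don't matter; the point is the shape. Because the catastrophic “near misses” are more probable than the extinction outcome itself and are bad on almost any view, they dominate the calculation, and the action loses in expectation even under assumptions generous to the pro-extinction side.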
I know this is hardly a rousing argument to inspire you in your career in biorisk, but I think it should at least help you guard against taking a stronger pro-extinction view.
No, there is no way to be confident.