Here are three toy existential catastrophe scenarios to think about:
A biological catastrophe (potentially from AI) which kills all humans and leaves animals mostly untouched, due to their biology
A paperclipping-style AI takeover scenario where AI turns everything into something else
A human disempowerment scenario where humans are left alive but substantively lose control over the future and its direction
I think it would be pretty interesting to think about interventions one could take to make the world persistently better for wild animals in the event that humans go extinct from biological catastrophe. I’m not sure you could do much, but it could be very impactful if worst-case bio gets bad enough!
My view is that bio x-risk is fairly low, so scenarios where there are no humans but there are still nonhuman animals (in the near future) seem pretty unlikely to me.
In the first of these, I think most of the EV comes from whether technologically-capable intelligence evolves again or not. I'd put that at more likely than not (for, say, extinction via bio-catastrophe), but not above 90%.
Have you thought about whether there are any interventions that could transmit human values to this technologically capable intelligence? The complete works of Bentham and an LLM on a ruggedised, solar-powered laptop that helps them translate English into their language...
Not very leveraged given the fraction within a fraction within a fraction of success, but maybe worth one marginal person.
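To make the "fraction within a fraction within a fraction" point concrete, here's a minimal Fermi sketch in Python. Every number in it is an illustrative assumption of mine, not a figure anyone in this thread has endorsed:

```python
# Minimal Fermi sketch of the compounding fractions behind this intervention.
# All probabilities below are made-up placeholders for illustration only.

p_bio_extinction = 0.01                   # assumed: humans go extinct via bio-catastrophe
p_intelligence_reevolves = 0.7            # assumed: technologically-capable intelligence evolves again
p_artifact_survives_and_matters = 0.001   # assumed: the cached materials survive, are found, and shift values

p_success = p_bio_extinction * p_intelligence_reevolves * p_artifact_survives_and_matters
print(f"Chance the intervention ever pays off: {p_success:.1e}")  # ~7e-06 on these numbers
```

On these (made-up) inputs the compound probability comes out around 10^-5 to 10^-6, which is the sense in which the intervention looks unleveraged, though perhaps still worth one marginal person if the payoff conditional on success is large.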