I have a question, and then a consideration that motivates it, which I've also framed as a question you can answer if you like.
If an existential catastrophe occurs, how likely is it to wipe out all animal sentience on earth?
I’ve already asked that question here (and also to some acquaintances working in AI Safety), but the answers have varied considerably. It seems we’re quite far from a consensus on this, so it would be interesting to see perspectives from the varied voices taking part in this symposium.
A less important question, but one that may clarify what motivates my main question: if you believe that a substantial share of X-risk scenarios leave animal sentience behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future?

A few days ago I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios, but then thought that if this factor is rarely brought up, it may be that I am simply uninformed about the reasons why the experiences of invertebrates (if they are sentient) might not matter much in future trajectories (aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals will likely continue existing on earth for a long time). There are probably different reasons to agree (or disagree) with this, and I’d be happy to hear yours briefly, though it’s not as important to me as my first question. Thank you for doing this!
Here are three toy existential catastrophe scenarios to think about:
A biological catastrophe (potentially from AI) which kills all humans and leaves animals mostly untouched, due to their biology
A paperclipping-style AI takeover scenario where AI turns everything into something else
A human disempowerment scenario where humans are left alive but substantively lose control over the future and its direction
I think it would be pretty interesting to think about interventions one could take to make the world persistently better for wild animals in the event that humans go extinct from biological catastrophe. I’m not sure you could do much, but it could be very impactful if worst-case bio gets bad enough!
My view is that bio x-risk is fairly low, so the scenarios where there are no humans but there are nonhuman animals (in the near future) are pretty unlikely.
In the first of these, I think most of the EV comes from whether technologically-capable intelligence evolves again or not. I’d put that at more likely than not (for, say, extinction via bio-catastrophe), but not above 90%.
Have you thought about whether there are any interventions that could transmit human values to this technologically capable intelligence? The complete works of Bentham and an LLM on a ruggedised, solar-powered laptop that helps them translate English into their language...
Not very leveraged given the fraction within a fraction within a fraction of success, but maybe worth one marginal person.
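To make the "fraction within a fraction within a fraction" intuition concrete, here is a minimal back-of-envelope sketch. The probabilities are purely illustrative placeholders rather than claims from the exchange above, except that the re-evolution figure is kept inside the "more likely than not, but not above 90%" range mentioned earlier:

```python
# Back-of-envelope estimate of how leveraged the "values archive" intervention is.
# All numbers below are illustrative placeholders, not figures from the discussion
# (the re-evolution probability is simply chosen within the stated 50-90% range).

p_bio_extinction = 0.01          # placeholder: a bio-catastrophe kills all humans
p_intelligence_reevolves = 0.7   # within the "more likely than not, below 90%" range
p_intervention_succeeds = 0.05   # placeholder: the archive survives, is found, and shifts values

p_overall = p_bio_extinction * p_intelligence_reevolves * p_intervention_succeeds
print(f"Probability the intervention ever matters: {p_overall:.4%}")  # ~0.035%
```

Under placeholders like these, the overall chance of impact is tiny, which is why the intervention looks worth at most one marginal person rather than a major effort.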