I think wild animal suffering isn’t a long-term issue except in scenarios where we go extinct for non-AGI-related reasons. The three likeliest scenarios are:
1. Humans leverage AGI-related technologies in a way that promotes human welfare as well as (non-human) animal welfare.
2. Humans leverage AGI-related technologies in a way that promotes human welfare and is effectively indifferent to animal welfare.
3. Humans accidentally use AGI-related technologies in a way that is indifferent to human and animal welfare.
In all three scenarios, the decision-makers are likely to have “ambitious” goals that favor seizing more and more resources. In scenario 2, efficient resource use almost certainly implies that biological human bodies and brains get switched out for computing hardware running humans, and that wild animals are replaced with more computing hardware, energy/cooling infrastructure, etc. Even if biological humans who need food stick around for some reason, it’s unlikely that the optimal way to efficiently grow food in the long run will be “grow entire animals, wasting lots of energy on processes that don’t directly increase the quantity or quality of the food transmitted to humans”.
In scenario 1, wild animals might be euthanized, or uploaded to a substrate where they can live whatever number of high-quality lives seems best. This is by far the best scenario, especially for people who think (actual or potential) non-human animals might have at least some experiences that are of positive value, or at least some positive preferences that are worth fulfilling. I would consider this extremely likely if non-human animals are moral patients at all, though scenario 1 is also strongly preferable if we’re uncertain about this question and want to hedge our bets.
Scenario 3 has the same impact on wild animals as scenario 1, and for analogous reasons: resource limitations make it costly to keep wild animals around. Scenario 3 is much worse than scenario 1, though, because human welfare matters so much; even if the average present-day human life turned out to be net-negative, that would be a contingent fact that could be addressed by improving global welfare.
I consider scenario 2 much less likely than scenarios 1 and 3; my point in highlighting it is to note that scenario 2 is similarly good for the purpose of preventing wild animal suffering. I also consider scenario 2 vastly more likely than “sadistic” scenarios where some agent is exerting deliberate effort to produce more suffering in the world, for non-instrumental reasons.
What’s your probability that wild-animal suffering will be created in (instrumentally useful or intrinsically valued) simulations?
[Reinforcing Alice for giving more attention to this consideration despite the fact that it’s unpleasant for her]
Maybe something like spreading cooperative agents, which is helpful whether things go well or not.
[speculative]
What is meant by “cooperative agents”? Personally, I suspect “cooperativeness” is best split into multiple dimensions, analogous to lawful/chaotic and good/evil in a roleplaying game. My sense is that:
—humanity is made up of competing groups
—bigger groups tend to be more powerful
—groups get big because they are made up of humans who are capable of large-scale cooperation (in the “lawful” sense, not the “good” sense)
There’s probably some effect where humans capable of large-scale cooperation also tend to be more benevolent. But you still see lots of historical examples of empires (big human groups) treating small human groups very badly. (My understanding is that small human groups treat each other badly as well, but we hear about it less because such small-scale conflicts are less interesting and don’t fit as neatly into grand historical narratives.)
If by “spreading cooperative agents” you mean “spreading lawfulness”, I’m not immediately seeing how that’s helpful. My prior is that the group that’s made up of lawful people is already going to be the one that wins, since lawfulness enables large-scale cooperation and thus power. Perhaps spreading lawfulness could make conflicts more asymmetrical, by pitting a large group of lawful individuals against a small group of less lawful ones. In an asymmetrical conflict, the powerful group has the luxury of subduing the much less powerful group in a way that’s relatively benevolent. A symmetrical conflict is more likely to be a highly destructive fight to the death. Powerful groups also have stronger deterrence capabilities, which disincentivizes conflict in the first place. So this could be an argument for spreading lawfulness.
Spreading lawfulness within the EA movement seems like a really good thing to me. More lawfulness will allow us to cooperate at a larger scale and be a more influential group. Unfortunately, utilitarian thinking tends to have a strong “chaotic good” flavor, and utilitarian thought experiments often pit our harm-minimization instincts against deontological rules that underpin large-scale cooperation. This is part of why I spent a lot of time arguing in this thread and elsewhere that EA should have a stronger central governance mechanism.
BTW, a lot of this thinking came out of these discussions with Brian Tomasik.
Thanks for the great post!
Regarding a “MIRI2,” at the Foundational Research Institute our goal is to research strategies for avoiding dystopian futures containing large amounts of suffering. We think that paperclip maximizers would create a lot of suffering.
I think there are good arguments against value-spreading.
“Bob: agree, to make lots of suffering, it needs pretty human-like utility functions that lead to simulations or making many sentient beings.”
I’m pretty sure this is false. Superintelligent singletons that don’t specifically disvalue suffering will make lots of it (relative to the current amount, i.e. one planetful) in pursuit of other ends. (They’ll make ancestor simulations, for example, for a variety of reasons.) The amount of suffering they’ll make will be far less than the theoretical maximum, but far more than what e.g. classical utilitarians would do.
If you disagree, I’d love to hear that you do—because I’m thinking about writing a paper on this anyway, it will help to know that people are interested in the topic.
And I think normal humans, if given command of the future, would make even less suffering than classical utilitarians.
Can you elaborate on this?
Sure, sorry for the delay.
The ways that I envision suffering potentially happening in the future are these:
—People deciding that obeying the law and respecting the sovereignty of other nations is more important than preventing the suffering of people inside them
—People deciding that doing scientific research (simulations are an example of this) is well worth the suffering of the people and animals experimented on
—People deciding that the insults and microaggressions that affect some groups are not as bad as the inefficiencies that come from preventing them
—People deciding that it’s better to have a few lives without suffering than many, many, many lives with suffering (even when the many lives are all still, all things considered, good)
—People deciding that AI systems should be designed in ways that make them suffer in their daily jobs, because it’s most efficient that way.
Utilitarianism comes down pretty strongly in favor of these decisions, at least in many cases. My guess is that in post-scarcity conditions, ordinary people will be more inclined to resist these decisions than utilitarians will. The big exception is the sovereignty case; there I think utilitarians would produce less suffering than average humans would. But those cases will only happen for a decade or so and will be relatively small-scale.