The decision theory argument isn't just about the ability to retaliate; it's about the ability to engage in reciprocal decision-making and honor agreements. Most animals can't make or understand explicit agreements or intentionally coordinate based on an understanding of others' choices. Maybe some corvids and a very few other nonhuman animals can try to imagine our perspectives and act on predictions of what we're likely to decide, at levels of abstraction that might give us some basis for ongoing noninstrumentalizing cooperation.
This matters more in our current context because:
We’re relatively early in cosmic time, with vast potential ahead
Our capacity for effective coordination and decision-making is precarious and needs strengthening
Given those facts, our priority ought to be preserving and improving our ability to make good individual and collective decisions. While animal welfare matters, compromising human coordination capacity to address it would be counterproductive—we need better coordination to address any large-scale welfare concerns effectively.
Humans are fundamentally an instrumentalizing species—that’s how we solve problems. Animals suffer in factory farms not because we instrumentalize them, but because our capacity for instrumental reasoning is being turned against itself through broken coordination systems. Trying to fix animal suffering without addressing this underlying coordination failure seems like palliative care for a dying civilization.
If you are interested in cooperating with nonhuman animals, say on the theory that cognitive diversity enables more gains from trade, it would make more sense to figure out how to trade more equitably and profitably with whales or corvids than to treat chickens as counterparties in a negotiation.
There are some forms of agreement you can make with animals and some you cannot. I don't see why they can't intentionally coordinate based on an understanding of our choices. A cow or a crow might move closer to someone who gives them food and act kindly towards them later on, but they will refuse to approach and cooperate if they realise that person has a history of deception.
There are also possible worlds in which animals' intelligence is enhanced well beyond its current level; given a technology explosion, that could even happen within our lifetimes. In those worlds animals would be able to meet any threshold you set for them.
I really struggle to see a consistent way to be respectful towards people in comas, or towards babies, without also respecting animals. You need a very specific argument for why both of these are true:
Being uncooperative towards animals is fine even though they might become agents (by your threshold) with some additional technology.
Being uncooperative towards babies is not fine because many of them will become agents in the future.
I believe the only consistent way to disregard animal interests is to deny that animals have interests at all, as Yudkowsky does. As long as animals have interests, it's very difficult to explain why screwing them over won't send the signal "I might screw over others if I can get away with it."