> If it is indeed possible to modify animal minds to such an extent that we would be 100% certain that previously displeasing experiences are now blissful, then couldn’t we extend this logic and “solve” every single problem? Like, making starvation and poverty and disease and extinction blissful as well?

> I feel there are crucial moral and practical (e.g., second-order) considerations to account for here.
Good question.

I think that a human being in a constant blissful state might endanger someone’s life or render them non-functional, which isn’t much of an issue for a farm animal.
However, I do think that we can use genetic engineering to make people happier, healthier, and more intelligent than anyone who has ever lived. I think this would have positive network effects on society at large, making people more altruistic and society better functioning. I think we’re quite close to achieving this and that EAs could make deliberate efforts to accelerate the process. My argument was not well received on the EA Forum, I think largely because the idea is controversial and so are some of the researchers I cited (https://forum.effectivealtruism.org/posts/gaSHkEf3SnKhcSPt2/the-effective-altruist-case-for-using-genetic-enhancement-to).
The second-order considerations are definitely a problem once there is more widespread adoption. If only 0.001% of the population is using genetic enhancement, there is very little in the way of collective-action problems. If a sizeable portion is using this technology, then we run into game theory: positional traits, for instance, confer advantages only relative to others, so individually rational enhancement choices can add up to a collectively wasteful arms race. This is the topic of Jonathan Anomaly’s “Creating Future People”, whose second edition was released yesterday. I will most likely be reviewing it on my blog rather soon. I am not sure whether the EA Forum would consider it relevant.
> I think that a human being in a constant blissful state might endanger someone’s life or render them non-functional
But if pure suffering elimination were the only thing that mattered, no one would be endangered, right? I am guessing there are some other factors you account for when valuing human lives?
> which isn’t much of an issue for a farm animal.
I suspect we have very different ethical intuitions about the intrinsic value of non-human lives.
But even from an amoral perspective, this would be an issue: if a substantial number of engineered chickens pecked each other to death (which happens even now), it would reduce the profitability and uptake of this method.
> The second-order considerations are definitely a problem once there is more widespread adoption. If only 0.001% of the population is using genetic enhancement, there is very little in the way of collective-action problems.
I partially agree, but even a couple of malevolent actors who enhance themselves considerably could cause a great deal of trouble. See this section of “Reducing long-term risks from malevolent actors”.