Of course, life for farmed animals has gotten worse... but I think people believe we will eventually render factory farming redundant through cultivated meat.
I think there’s recently been more skepticism about cultured meat (see here), although I still expect factory farming to be phased out eventually regardless. Either way, it’s not clear a similar argument would work for artificial sentience, which could be used as tools, used in simulations, or even intentionally tortured. There’s also some risk that nonhuman animals themselves will be used in space colonization, but that may not be where most of the risk is.
Also, considering extinction specifically, Will MacAskill has argued that we should avert human extinction based on option value, even if we think extinction might be best: even if we avert extinction now, we can in principle go extinct later on if we judge that to be the best option.
It seems unlikely to me that we would go extinct, even conditional on “us” deciding it would be best. Who are “we”? There will probably be very divergent views (especially after space colonization, both within and between colonies, and these colonies may be spatially distant and self-sufficient, so influencing them becomes much more difficult). You would need a sufficiently large coalition both to agree and to force the rest to go extinct, and each of those is unlikely, even conditional on “our” judgement that extinction would be better; actively attempting to force groups into extinction may itself be an s-risk. In this way, an option value argument may cut the other way, too: once TAI arrives in a scenario with multiple powers, or space colonization goes sufficiently far, going extinct effectively stops being an option.
I’m not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience, so it’s certainly an important consideration.
It seems unlikely to me that we would go extinct, even conditional on “us” deciding it would be best.
This is a fair point to be honest!