(2) People’s “values” will be more in control than ever before
If (2) happens, any positive concern for the welfare of nonhumans will likely go far. For instance, in a world where it's technologically easy to give every person what they want without side effects, even just 10% of the population being concerned about nonhuman welfare could, via compromise, be enough to ensure society stops causing harm to animals (or digital minds).
Hmm, this just feels a bit hopeful to me. We may well move into this attractor state, but what if we lock in suffering (not necessarily forever, maybe just for a long time) before reaching that point? The following paragraphs from the Center for Reducing Suffering's page on s-risks concern me:
Crucially, factory farming is the result of economic incentives and technological feasibility, not of human malice or bad intentions. Most humans don’t approve of animal suffering per se – getting tasty food incidentally happens to involve animal suffering. In other words, technological capacity plus indifference is already enough to cause unimaginable amounts of suffering. This should make us mindful of the possibility that future technologies might lead to a similar moral catastrophe.
...
Comparable to how large numbers of nonhuman animals were created because it was economically expedient, it is conceivable that large numbers of artificial minds will be created in the future. They will likely enjoy various advantages over biological minds, which will make them economically useful. This combination of large numbers of sentient minds and foreseeable lack of moral consideration presents a severe s-risk. In fact, these conditions look strikingly similar to those of factory farming.
Overall, I’m worried our values may not improve as fast as our technology.