They seem relevant because willpower and attention budgets are limited, and our altruism-directed activities (and habits, etc.) draw from those budgets.
One of the most important reasons, in my opinion, is that you can influence others to change their diet, spread concern for animals more generally, and expand our moral circle. To reduce the chances of s-risks (scenarios in which vast amounts of suffering are locked in), we need a society that stops seeing animals as objects. How can we care about digital sentience when we don’t even care about cows?
I concede that this argument goes through probabilistically, but I feel like people overestimate its effect.
Almost none of the non-vegetarian EAs would want to lock in animal suffering for the long-term future, so the argument that personal veg*ism makes a difference to s-risks is somewhat conjunctive. It seems to rely on the hidden premise that humans will attain control over the future but that EA values will die out or have only a negligible effect. That’s possible, but it doesn’t rank among the scenarios I’d consider likely.
I think the trajectory of civilization will gravitate toward one of two attractors:
(1) People’s “values” will become less and less relevant as Moloch dynamics accelerate
(2) People’s “values” will be more in control than ever before
If (1) happens, it doesn’t matter in the long run what people value today.
If (2) happens, any positive concern for the welfare of nonhumans will likely go far. For instance, in a world where it’s technologically easy to give every person what they want without side effects, even just 10% of the population being concerned about nonhuman welfare could, via compromise, be enough for society to stop causing harm to animals (or digital minds).
You may say “but why assume compromise instead of war or value-assimilation where minority values die out?”
Okay, those are possibilities. But like I said, it makes the claim more conjunctive.
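To make the “conjunctive” point concrete, here is a toy calculation. The premises are a rough gloss of the ones above, they are treated as roughly independent for simplicity, and the numbers are made up purely for illustration; nothing here is anyone’s actual estimate:

\[
P(\text{argument holds})
  \approx P(\text{humans attain control over the future})
  \times P(\text{EA-like values end up with negligible influence})
  \times P(\text{a marginal shift in diet-driven attitudes still changes what gets locked in})
  \approx 0.5 \times 0.5 \times 0.5 = 0.125
\]

Each additional premise (e.g., “compromise rather than war or value-assimilation”) multiplies in another factor below 1, so the overall probability keeps shrinking.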
Also, there are some reasons to expect altruistic values to outcompete self-oriented ones. (Note that this blog post was written before Open Phil, before FTX, etc.) (Relatedly, outside of EA, most people don’t seem to care about, or recognize, how difficult it is for humans to attain control over the long-term future.)
Maybe we live in an unlucky world where some kind of AI-aided stable totalitarianism is easy to bring about (in the sense that it doesn’t require unusual degrees of organizational competence or individual rationality; people can “stumble” into a series of technological inventions that opens the door to it). Still, in that world, there are again some non-obvious steps from “slightly increasing the degree to which the average Westerner cares about nonhuman animals” to “preventing an AI-aided dictatorship with bad values.” Spreading concern for nonhuman suffering likely has a positive effect here, but it looks unlikely to be very important compared to other interventions. Conditioning on that totalitarian lock-in scenario, it seems more directly useful to promote norms around personal integrity (to prevent people with dictatorial tendencies from attaining positions of influence/power) or to work on AI governance.
I think there are s-risks we can tractably address, but I see the biggest risks around failure modes of transformative AI (technical problems rather than problems with people’s values).
Among interventions around moral circle expansion, I’m most optimistic about addressing risks of polarization – preventing concern for the whole cause area from becoming “something associated with the out-group,” something that people look down on for various reasons. (For instance, I didn’t like this presentation.) In my ideal scenario, all the non-veg*an EAs would often put in a good word for the intentions behind vegetarianism or veganism and emphasize agreement with the view that sentient minds deserve our care. (I largely see this happening already.)
(Arguably, personal veganism or vegetarianism is a great way to prevent concern for nonhumans from becoming “something associated with the out-group” – especially if the people who go veg don’t promote their diets as an ideology in an off-putting fashion; otherwise it can backfire.)
Thanks for this thoughtful response.

Regarding attractor (2), where people’s values are more in control than ever before and even 10% of the population caring about nonhuman welfare could be enough for society to stop harming animals (or digital minds): hmm, this just feels a bit hopeful to me. We may well move into this attractor state, but what if we lock in suffering (not necessarily forever, maybe just for a long time) before that point? The following paragraphs from the Center for Reducing Suffering’s page on s-risks concern me:
Crucially, factory farming is the result of economic incentives and technological feasibility, not of human malice or bad intentions. Most humans don’t approve of animal suffering per se – getting tasty food incidentally happens to involve animal suffering. In other words, technological capacity plus indifference is already enough to cause unimaginable amounts of suffering. This should make us mindful of the possibility that future technologies might lead to a similar moral catastrophe.
...
Comparable to how large numbers of nonhuman animals were created because it was economically expedient, it is conceivable that large numbers of artificial minds will be created in the future. They will likely enjoy various advantages over biological minds, which will make them economically useful. This combination of large numbers of sentient minds and foreseeable lack of moral consideration presents a severe s-risk. In fact, these conditions look strikingly similar to those of factory farming.
Overall, I’m worried our values may not improve as fast as our technology.