One objection here is that improving socioeconomic conditions can also broadly improve people’s values. Generally speaking, increasing wealth and security promotes self-expression values, which correspond decently well to having a wide moral circle. So there’s less general reason to single out moral issues like animal welfare as being a comparatively higher priority.
However, improving socioeconomic conditions also brings forward the date at which technological s-risks will present themselves, so in some cases what we're looking for is differential moral progress. This tells me to increase the weight of animal welfare for the long run. (It’s overall slightly higher now than before.)
Good points. I think it’s also important where these improvements (socioeconomic or moral) are happening in the world, although I’m not sure in which way. How much effect do further improvements in socioeconomic conditions in the US and China have on emerging tech and values in those countries compared to other countries?
Emerging tech is treated as an x-risk here, so s-risks from tech should be considered separately. In terms of determining weights and priorities I would sooner lump s-risks into growth and progress than into x-risks.
FWIW, s-risks are usually considered a type of x-risk, and generally involve new technologies (artificial sentience, AI).
I don’t see climate change policy as promoting better moral values. Sure, better moral values can imply better climate change policy, but that doesn’t mean there’s a link the other way. One of the reasons animal welfare uniquely matters here is that we think there is a specific phenomenon where people care less about animals in order to justify their meat consumption.
Well, that’s been observed in studies on attitudes towards animals and meat consumption, but I think similar phenomena could be plausible for climate change. Action on climate change may affect people’s standards of living, and concern for future generations competes with concern for yourself.
I also don’t see reducing cognitive dissonance or rationalization as the only way farm animal welfare improves values. One way is simply more attention to and discussion of the issue; another could be that identifying with or looking up to people (the president, the party, the country) who care about animal welfare increases concern for animals. Possibly something similar could be the case for climate change and future generations.