Future artificial sentiences would be involved in the s-risks EAF is concerned with [1][2] and possibly in astronomical waste scenarios under a view similar to total utilitarianism. (See also "The expected value of extinction risk reduction is positive" by Jan M. Brauner and Friederike M. Grosse-Holz.) You may have already implicitly included these in some of your assessments of emerging tech (AI), but less so in issues that may involve shifting the moral values of society or in the less direct impacts of moral values on emerging tech. If AGI does not properly care for artificial sentiences because humans generally, policymakers, or their designers didn't care, this could be astronomically bad. That being said, near-misses could also be astronomically bad.
I think all policy driven in part by, or promoting, impartial concern for welfare may contribute to concern for artificial sentience, and just having a president who is more impartial and more generally concerned with welfare might, too. Similarly, policy driven by or promoting better values (moral or relating to rationality), and a president with better values generally, may be good for the long-term future.
Better policies for and views on farmed animals seem like they would achieve the best progress towards the moral inclusion of artificial sentience, among the issues considered in CSS. They would also drive the most concern for wild animals, of course.
More concern for climate change could also translate into more concern for future generations generally, although I'm less confident that this would extend to the far future. It could also drive concern for nonsentient entities at the expense of sentient individuals (wild animals, mostly).
My suggestion would be that
the long-term weights of several issues be increased because of the potential long-term impacts of better or worse values,
the emerging tech issue consider the effects of values, and/or
a separate issue be made for moral values.
You might have to be careful to avoid double (or triple!) counting.
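To illustrate the double-counting worry with a minimal, purely hypothetical sketch (the issue names, weights, and shares below are made up and are not taken from how CSS actually computes anything): if each issue's long-term weight separately bakes in a bonus for the same values effect, that one effect gets counted once per issue; pulling it out as a single factor and splitting it across issues keeps it from inflating the totals.

```python
# Purely hypothetical illustration -- issue names, weights, and shares are
# invented for this sketch and are not taken from CSS.

# Naive approach: each issue's long-term weight separately includes the
# same "better moral values" bonus, so that one effect is counted three times.
naive_weights = {
    "animal welfare": 0.10 + 0.05,  # base + values bonus
    "climate change": 0.15 + 0.05,  # base + the same values bonus again
    "emerging tech":  0.30 + 0.05,  # base + the same values bonus a third time
}

# Alternative: keep base weights per issue, give the values effect its own
# single weight, and credit each issue only for its estimated share of it.
base_weights = {"animal welfare": 0.10, "climate change": 0.15, "emerging tech": 0.30}
moral_values_weight = 0.05
share_of_values_effect = {"animal welfare": 0.6, "climate change": 0.1, "emerging tech": 0.3}

adjusted_weights = {
    issue: base_weights[issue] + moral_values_weight * share_of_values_effect[issue]
    for issue in base_weights
}
# The values bonus now sums to 0.05 across all issues instead of 0.15.
print(adjusted_weights)
```

Whether the values effect is kept as its own separate issue or distributed by shares like this matters less than the basic point: the same long-term values impact should only enter the overall weighting once.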
See:
"Why I prioritize moral circle expansion over artificial intelligence alignment" by Jacy Reese
The post and comments on "Does improving animal rights now improve the far future?" by Evira
"Arguments for and against moral advocacy" by Tobias Baumann (also on FRI's website)
"Against moral advocacy" by Paul Christiano
"How tractable is changing the course of history?" by Jamie Harris (especially the section "How tractable are trajectory changes towards moral circle expansion?")
I previously included wild animal suffering in the long-run weight of animal welfare. Having looked at some of these links and reconsidered, I think I was over-weighting animal welfare's impact on wild animal suffering.
One objection here is that improving socioeconomic conditions can also broadly improve people's values. Generally speaking, increasing wealth and security promotes self-expression values, which correspond decently well to having a wide moral circle. So there's less general reason to single out moral issues like animal welfare as a comparatively higher priority.
However, improving socioeconomic conditions also brings forward the date at which technological s-risks will present themselves, so in some cases we are looking for differential moral progress. This tells me to increase the weight of animal welfare for the long run. (It's overall slightly higher now than before.)
Another objection: a lot of what we perceive as pure moral concern vs. apathy in governance could really be understood as a different tradeoff between freedom and government control. It's straightforward in the case of animal farming or climate change that the people who believe in a powerful regulatory state are doing good whereas the small-government libertarians are doing harm. But I'm not sure that this will apply generally in the future.
Emerging tech is treated as an x-risk here, so s-risks from tech should be considered separately. In terms of determining weights and priorities I would sooner lump s-risks into growth and progress than into x-risks.
I don't see climate change policy as promoting better moral values. Sure, better moral values can imply better climate change policy, but that doesn't mean there's a link the other way. One of the reasons animal welfare uniquely matters here is that we think there is a specific phenomenon where people care less about animals in order to justify their meat consumption.
At the moment I can't think of other specific changes to make, but I will keep it in mind and may hit upon something else.
Good points. I think it's also important where these improvements (socioeconomic or moral) are happening in the world, although I'm not sure in which way. How much effect do further improvements in socioeconomic conditions in the US and China have on emerging tech and values in those countries compared to other countries?
FWIW, s-risks are usually considered a type of x-risk, and generally involve new technologies (artificial sentience, AI).
Well, that's been observed in studies on attitudes towards animals and meat consumption, but I think similar phenomena are plausible for climate change. Action on climate change may affect people's standards of living, and concern for future generations competes with concern for yourself.
I also don't see reducing cognitive dissonance or rationalization as the only way farm animal welfare improves values. One is simply more attention to and discussion of the issue, and another could be that identifying with or looking up to people (the president, the party, the country) who care about animal welfare increases concern for animals. Possibly something similar could be the case for climate change and future generations.