Animal Liberation by Peter Singer was published in 1975, just 50 years ago. Wild animal suffering as a moral concern gained traction in effective altruism just 10-20 years ago. Moral ideas and social movements often take a long time to go from conception to general acceptance. For example, in the U.S., 65 years passed between the founding of the Mattachine Society, one of the earliest gay rights groups, in 1950 and the Supreme Court decision in 2015 that gave gay people the right to marry nationwide.
Given this, why would you consider it 90% likely that in 100 years, in 1000 years, or in 10,000 years, people wouldn't change their minds about wild animal suffering? Especially given that, on these timescales, I think you're also willing to entertain that there may be radical technological/biological changes to many or most human beings, such as cognitive-enhancing neurotech, biotechnological self-modification of the brain, potentially merging with AGI, and so on.
First off, I must say: I really like that answer.
I guess I'm concerned about how much of a value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree: I shouldn't give it a 90% likelihood.
Personally, I've never bought the whole value lock-in idea. Could AGI make scientific, technological, and even philosophical progress over time? Everybody seems to say yes. So, why would we think AGI would not be capable of moral progress?
It seems like an awkward relic of the "MIRI worldview", which I don't think ever made sense, and which has lost credibility since deep learning and deep reinforcement learning have become successful and prominent. Why should we think "value lock-in" is a real thing that would ever happen? Only if we make certain peculiar and, in my opinion, dubious assumptions about the nature of AGI.
When you say you can't imagine a majority of people caring about wild animal suffering, does this mean you can imagine what society will be like in 1000 or 10,000 years? Or even beyond that? I think this is a case where my philosophical hero Daniel Dennett's admonition is appropriate: don't mistake a failure of imagination for a matter of necessity. People's moral views have changed radically within the last 500 years (on topics like slavery, children, gender, violence, retribution, punishment, animals, race, nationalism, and more), let alone the last 1000 or 10,000.
I am an optimist in the David Deutsch sense. I think, given certain conditions in human society (e.g. science, liberal democracy, universal education, the prevalence of what might be called Enlightenment values), there is a tendency toward better ideas over time. Moral progress is not a complete accident.
How did you come to your view that wild animal suffering is important? Why would that process not be repeated on a large scale within the next 1000 or 10,000 years? Especially if per capita gross world product is going to increase to millions of dollars and people's level of education is going to go way up.