Personally, I've never bought the whole value lock-in idea. Could AGI make scientific, technological, and even philosophical progress over time? Everybody seems to say yes. So, why would we think AGI would not be capable of moral progress?
It seems like an awkward relic of the "MIRI worldview", which I don't think ever made sense, and which has lost credibility since deep learning and deep reinforcement learning have become successful and prominent. Why should we think "value lock-in" is a real thing that would ever happen? Only if we make certain peculiar and, in my opinion, dubious assumptions about the nature of AGI.
When you say you can't imagine a majority of people caring about wild animal suffering, does this mean you can imagine what society will be like in 1000 or 10,000 years? Or even beyond that? I think this is a case where my philosophical hero Daniel Dennett's admonishment is appropriate: don't mistake a failure of imagination for a matter of necessity. People's moral views have changed radically within the last 500 years (on topics like slavery, children, gender, violence, retribution, punishment, animals, race, nationalism, and more), let alone the last 1000 or 10,000.
I am an optimist in the David Deutsch sense. I think, given certain conditions in human society (e.g. science, liberal democracy, universal education, the prevalence of what might be called Enlightenment values), there is a tendency toward better ideas over time. Moral progress is not a complete accident.
How did you come to your view that wild animal suffering is important? Why would that process not be repeated on a large scale within the next 1000 or 10,000 years? Especially if per capita gross world product is going to increase to millions of dollars and people's level of education is going to go way up.