My position statement
As a suffering-focused ethicist who generally rejects moral aggregation across individuals (I am most sympathetic to painism), I have a higher bar for “AGI going well for humans” than many others do; it’s not clear to me that previous technological advances went well for humans
The agricultural revolution’s “luxury trap”: the shift from hunting and gathering to farming allowed some humans to consolidate unprecedented wealth and power, but at the cost of the wellbeing, welfare, and rights of very many others
Perhaps similar arguments can be made for the industrial and digital revolutions
Even an AGI Omelas – a flourishing world built on the suffering of some – is not an instance of AGI going well
“AGI going well” necessarily leaves many humans with the stated preference to help animals (e.g. abolishing animal exploitation and solving wild animal suffering), and it certainly gives us the means and opportunity to do so
I happen to think that AGI going well for humans is unlikely, even by the lights of someone who is more upside-focused
We’re on track for creating something that is more intelligent than us (better at understanding the world and achieving goals within it) – and probably something with awareness, autonomy, agency, and the capacity for recursive self-improvement and self-replication – without understanding how it works, how to make it do what we want, or what it is we even want it to do
So, combining these normative and empirical claims, I believe a world in which AGI goes well for humans is a very small fraction of the possibility space
And when I try to imagine what this AGI-going-well-for-humans world looks like, mostly I don’t really know, but it seems likely that in this world:
We retain and develop our moral wisdom (the most fundamental tenet of which is plausibly “non-maleficence and compassion towards all sentient beings”)
And we have the means to enact this moral wisdom
So, we abolish animal exploitation and solve wild animal suffering
Thus, AGI goes well for animals as well as humans!