Although Fukuyama’s “end” is anything but final: there will come a point where democracy, free markets, and consumerism collapse and give way to an AI-driven technocracy.
Democracy, human rights, free markets, and consumerism “won out” because they increased human productivity and standards of living relative to rival systems. That doesn’t make them a destiny; it makes them a step, as temporary as all things.
For the wealthy, for rulers, for anyone with power, other humans were and are simultaneously assets and liabilities. But we are gradually entering an age where other humans will cease to be assets yet will remain liabilities. After all, you don’t need to provide health insurance or cover the healthcare costs of a robot. As long as humans are economically needed, the best system for them is a free-market democracy.
But what happens to free-market democracy when humans are no longer needed?
We will eventually arrive at an ugly new era, fully automated, in which humanity becomes increasingly redundant, worthless, and obsolete. The utility and power (economic, military, and civil) of the average person will shrink to nearly nothing. No one will “need” you, the human, and unless you are among the affluent, you’ll be lucky if others altruistically wish to keep you alive…
We still hold out hope that the global elites will care about human rights, human lives, democracy, and consumerism in the coming age, when we are powerless compared to those who own the robots and all the humanless means of production. But perhaps it’s the inner cynic in me that says this is highly unlikely.
Yet as altruistic folks, we strive to make sure the system that replaces the current one will be benevolent to most, if not all.
I believe the end goal isn’t a world ruled by a benevolent global elite that owns all the robots. The goal isn’t to create a ‘techno-leviathan’ for people to ride. The goal is to find a benevolent God in mind design space, one we would be happy to give up sovereignty to. That, I think, is what AI alignment is about.
(A related discussion on LW.)
Either way, I think we’re going to need some serious first-principles work at the intersection of AI alignment and political philosophy. “What is the nature of a just political and economic order when humans are economically useless and authority lies with a superhuman AI?” “What institution would even have the legitimacy to ask this question, let alone answer it?”
The plausibility of this depends on exactly what the culture of the elite is. (In general, I would be interested to know what all the different elite cultures in the world actually are.) I can imagine some tendency to regard the poor, the “low-merit,” as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking “Why not? The world is big; let the undeserving live,” or even holding views more humane than that.
But despite whatever humaneness there might be in the elite, I can also see Molochian pressures to discard humans. Can Moloch be stopped? (This seems like it would be a very important thing to accomplish, if tractable.) If we could solve international competition (competition between the elite cultures who are in charge of things), then nations could choose not to build the most advanced economies they possibly could, and could thus afford a more “pro-slack” mentality, as the toy model below illustrates.
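To make the Molochian dynamic concrete, here is a minimal sketch modeling the race to full automation as a one-shot prisoner’s dilemma between two rival powers. The payoff numbers and the strategy names (“slack” vs. “race”) are my own illustrative assumptions, not anything from the discussion above; the point is only that racing dominates individually even though mutual racing leaves both sides worse off.

```python
# A toy model of the "Molochian" race to automate, framed as a
# one-shot prisoner's dilemma. Payoff values are illustrative
# assumptions chosen to produce the dilemma structure.

from itertools import product

# Strategies: "slack" = keep humans in the loop, forgo some growth;
# "race"  = automate as fast as possible for competitive advantage.
PAYOFFS = {
    # (row strategy, column strategy): (row payoff, column payoff)
    ("slack", "slack"): (3, 3),  # both keep slack: good for both
    ("slack", "race"):  (0, 4),  # the racer gains a decisive edge
    ("race",  "slack"): (4, 0),
    ("race",  "race"):  (1, 1),  # mutual race: worst shared outcome
}

def best_response(options, their_choice, me_is_row):
    """Return my payoff-maximizing strategy, holding the rival's choice fixed."""
    def my_payoff(mine):
        key = (mine, their_choice) if me_is_row else (their_choice, mine)
        return PAYOFFS[key][0 if me_is_row else 1]
    return max(options, key=my_payoff)

options = ("slack", "race")
# A profile is a Nash equilibrium when each side is best-responding
# to the other; with these payoffs, only ("race", "race") qualifies.
for row, col in product(options, repeat=2):
    if (best_response(options, col, True) == row
            and best_response(options, row, False) == col):
        print(f"Nash equilibrium: ({row}, {col}), payoffs {PAYOFFS[(row, col)]}")
```

Running this prints only (“race”, “race”) with payoffs (1, 1), even though (“slack”, “slack”) pays both sides more. In this framing, “solving international competition” means changing the game itself (enforcement, treaties, or a single decision-maker) rather than hoping either player unilaterally restrains itself.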
Maybe AGI will solve international competition? I think a relatively simple, safe alignment target for an AGI would be one that is the servant of humans. But which humans? Each individual? Or the elites who currently represent them? If the elites, then it wouldn’t automatically stop Moloch. Otherwise it might.
(Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity’s “revealed preference”.)
Wonderfully written.