Epistemic status: just a 5-minute collation of some useful sources, with a little explanatory text off the top of my head.
Stampy’s answers to “Why is AI dangerous?” and “Why might we expect a superintelligence to be hostile by default?” seem pretty good to me.
To elaborate a little:
Alignment seems hard. Humans value very complex things, which seem both A) difficult to tell an AI to preserve and B) unlikely for an AI to preserve by default.
A number of things seem to follow pretty directly from the idea of ‘creating an agent which is much more intelligent than humans’:
Non-human goals: we have a strong prior that its goals will not line up with human goals (see: orthogonality thesis).
Optimising is Destructive: optimising for one value system will by default destroy value according to other value systems (see: instrumental convergence); a toy sketch of this follows below.
Intelligence is Dangerous: as it is much smarter than humans, predicting its behaviour will be very difficult, as will containing or controlling it (see: AI boxing).
When you combine these things, you get an expectation that the default outcome of unaligned AGI is very bad for humans—and an idea of why AI alignment may be difficult.
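To make the “Optimising is Destructive” point concrete, here is a minimal toy sketch (my own illustration, not from the linked sources, with made-up value functions): an optimiser splits a fixed resource budget but is scored only on value system A, so the A-optimal allocation leaves value system B with nothing.

```python
# Toy sketch (illustrative only): an optimiser scored purely on value system A.
# Value system B depends on the same finite resources, but because the
# optimiser has no term for B, the A-optimal allocation leaves B with nothing.

BUDGET = 100  # total units of some shared resource

def value_A(units_for_A: int) -> int:
    """Value system A: more resources for A is strictly better."""
    return units_for_A

def value_B(units_for_B: int) -> int:
    """Value system B: more resources for B is strictly better."""
    return units_for_B

# Brute-force "optimiser": pick the split of the budget that maximises A only.
best_split = max(range(BUDGET + 1), key=value_A)

print(f"A-optimal split: {best_split}/{BUDGET} units to A")
print(f"Value according to A: {value_A(best_split)}")           # 100
print(f"Value according to B: {value_B(BUDGET - best_split)}")  # 0
```

The point is not that the optimiser is malicious; it simply has no reason to leave anything for values it was not told about.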
To take a different approach:
Humans have a pretty bad track record of refraining from using massively destructive technology. It seems at least plausible that COVID-19 was a lab leak (and mere plausibility is enough for this argument). The other key example to me is the nuclear bomb.
What’s important is that both of these technologies are relatively difficult to get access to. By contrast, at least right now, it’s relatively easy to get access to state-of-the-art AI.
Why is this important? It’s related to the unilateralist’s curse. If we think that AI has the potential to be very harmful (which deserves its own debate), then the more people who have access to it, the more likely that harm becomes. Given our track record with lower-access technologies, it seems likely from this frame that accelerationism will lead to non-general artificial intelligence being used to do massive harm by humans.
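As a rough illustration of why access matters, here is a back-of-the-envelope model of my own (assumed numbers, actors treated as independent; not a claim from the unilateralist’s curse literature): if each of N actors with access misuses the technology with probability p, the chance that at least one does is 1 - (1 - p)^N, which climbs quickly as N grows.

```python
# Back-of-the-envelope sketch (assumed numbers, independence assumed):
# probability that at least one of N actors with access causes serious harm,
# if each does so independently with per-actor probability p.

p = 0.01  # assumed per-actor probability of catastrophic misuse

for n_actors in (10, 100, 1_000, 10_000):
    p_at_least_one = 1 - (1 - p) ** n_actors
    print(f"{n_actors:>6} actors -> P(at least one misuse) = {p_at_least_one:.4f}")
```

The exact numbers are made up; the shape is the point: widening access pushes the probability of at least one bad unilateral action towards 1.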