When people distinguish between alignment and capabilities, I think they’re often interested in the question of what research is good vs. bad for humanity. Alignment vs. capabilities seems insufficient to answer that more important question. Here’s my attempt at a better distinction:
There are many different risks from AI. Research can reduce some risks while exacerbating others. “Safety” and “capabilities” are therefore overly reductive categories. Research should be assessed by its distinct impacts on many different risks and benefits. If a research direction is better for humanity than most other research directions, then perhaps we should award it the high-status title of “safety research.”
Scalable oversight is a great example. It provides more accurate feedback to AI systems, reducing the risk that AIs pursue objectives conflicting with human goals because the feedback used to train them was inaccurate. But it also makes AI systems more commercially viable, shortening timelines and perhaps hastening the onset of other risks, such as misuse, arms races, or deceptive alignment. The cost-benefit calculation is quite complicated.
“Alignment” can be a red herring in these discussions, as misalignment is far from the only way that AI can lead to catastrophe or extinction.
Related: https://www.lesswrong.com/posts/zswuToWK6zpYSwmCn/some-background-for-reasoning-about-dual-use-alignment