‘AI alignment’ isn’t about whether a narrow, reactive, non-agentic AI system (such as a current LLM) seems ‘helpful’.
It’s about whether an agentic AI that can make its own decisions and take its own autonomous actions will make decisions that are aligned with general human values and goals.