I am personally not convinced of the usefulness of brain-computer interfaces for alignment; Robert Long has an alternative take here.
The fundamental problem, as I see it, is twofold:

1. Giving unaligned AI systems access to your neural state is bad™, and
2. “Merging” with AI systems is under-defined.
I’d love to see an actual explanation of how brain-computer interfaces would be useful for alignment.

Additionally, I object to the framing that “AI alignment is difficult because AI models would struggle to understand human values”. Under my best understanding, AI alignment is about making cognition aimable at all: the hard part is getting a system to reliably pursue any specified target, not getting it to understand what humans value.