I heard an amazing comment in the live chat of the ‘Will AI be an Existential Risk: An Intro to AI Safety Risk Arguments’ talk:
“I feel that Human-AI alignment is no different from Human-Human Alignment; we need to get better at Human-Agent Alignment as a whole.” (Note: this is from memory, so not an exact quote.)
I found this a profound statement. I wonder: if the bucket of Human-AI Alignment were to expand to include the most tractable topics/causes in Human-Agent Alignment, what might emerge that could otherwise have been missed?