I mean all of the above. I don’t want to restrict it to one typology of harm: anything affecting the long-term future via AI, which includes not just x-risk but value lock-in, s-risks, and multi-agent scenarios as well. And I mean making extrapolations from Musk’s willingness to directly impose his personal values, not just from current harms.
Side note: there’s no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren’t. So let’s say just OpenAI.
I would be very surprised if this doesn’t split discussion at least 60/40.