By “greater threat to AI safety” you mean it’s a bigger culprit in terms of the amount of x-risk caused, right? As opposed to being a threat to AI safety itself, by e.g. trying to get safety researchers removed from the industry/government (like this).
I mean all of the above. I don’t want to restrict it to one typology of harm, just anything affecting the long-term future via AI. That includes not just x-risk, but value lock-in, s-risks, and multi-agent scenarios as well. And making extrapolations from Musk’s willingness to directly impose his personal values, not just current harms.
Side note: there’s no particular reason to complicate it by including both OpenAI and DeepMind; they just seemed like good comparisons in a way Nvidia and DeepSeek aren’t. So let’s say just OpenAI.
I would be very surprised if this doesn’t split discussion at least 60–40.