I’m going to go against the grain here, and explain how I truly feel about this sort of AI safety messaging.
As others have pointed out, fearmongering on this scale looks absolutely insane to those who don’t assign a high probability to doom. Worse, Eliezer is calling for literal nuclear strikes and great power war to stop a threat that isn’t even provably real! Most AI researchers do not share his views, and neither do I.
I want to publicly state that pushing this maximalist narrative about AI x-risk will lead to terrorist actions against GPU clusters or against individuals involved in AI. Acts like these follow from the intense beliefs of those who agree with Eliezer and share a doomsday-cult style of thought.
Not only will that sort of behavior discredit AI safety and potentially EA entirely, it could hand the future to other actors or cause governments to lock down AI for themselves, making outcomes far worse.
Worse, Eliezer is calling for literal nuclear strikes
He’s calling for a policy that would be backed by whatever level of response was necessary to enforce it, including, if it escalated to that level, military response (plausibly including nuclear). This is different from literally calling for nuclear strikes right now. The distinction may be somewhat subtle, but I think it’s important to keep it in mind during this discussion.
I want to publicly state that pushing this maximalist narrative about AI x-risk will lead to terrorist actions against GPU clusters or against individuals involved in AI
This statement strikes me as overconfident. While the narrative presumably does at least somewhat increase the personal security concerns of individuals involved in AI, I think we need to be able to have serious discussions on the topic, and public policy shouldn’t be held hostage to worries that discussions about problems will somewhat increase the security concerns of those involved in those problems (e.g., certain leftist discourse presumably somewhat increases the personal security concerns of rich people, but I don’t think that fact is a good argument against leftism or in favor of silencing leftists).
including, if it escalated to that level, military response (plausibly including nuclear).
I don’t see where Eliezer has said “plausibly including nuclear”. The point of mentioning nuclear was to highlight the scale of the risk on Eliezer’s model (‘this is bad enough that even a nuclear confrontation would be preferable’), not to predict nuclear confrontation.
You’re right. I wasn’t trying to say that Eliezer explicitly said that the response should plausibly include nuclear use. I was saying that he said force should be used if needed, and it seemed plausible to me that he was imagining circumstances in which the level of force needed might be nuclear (hardened data centers?). But he has since explicitly stated that he was not imagining any response that would include nuclear use, so I hereby retract that part of my statement.