[edit: confused about the downvotes this has got instead of the disagree-votes]
I like that EA/AI Safety has come round to recognising that the development of AGI is inherently political, and I think posts like this are a good part of this trend.
I also like that this post is written in a clear, non-jargony way. Not every forum post has to be super technical!
A bit concerned that spreading knowledge of the Town and Country Planning Act (1947) might be a social infohazard :P
The second half of the article seems like it's edging towards the narcissism of small differences. It seems that this is more about how to frame messaging, or which specific policy choice is right. It's at least in the 'mistake theory' bucket of politics, but I wouldn't be surprised if some PauseAI advocates (or perhaps some anti-Pause groups) are beginning to think that AI might become more of a 'conflict theory' zone.
Main disagreement is around "The AI pause folks could learn from this approach." I really think the field of AI Safety/AI Governance has a lot to learn from the AI pause folks. For example, Holly Elmore is putting skin in the game, and honestly acting more credibly from my point of view than someone like Dario Amodei. People at frontier labs might have been keeping their x-risk estimates quiet a few years ago, but I don't like the fact that we know that Sam and Dario both have non-trivial estimates of doom (in relatively short timeframes, I'd wager) and didn't mention this to the US Senate under oath. The very simple "if you think it really could kill everyone, don't build it" is going to steamroll a lot of arguments from x-risk-concerned labs imho.
JWS, I appreciate your point that when Sam Altman and Dario Amodei gave testimony under oath to the US Senate and failed to honestly reveal their estimates of the likelihood that AGI/ASI could kill everyone, that was arguably one of the most egregious acts of perjury (by omission of crucial information) in US history.
It’s a major reason why I simply don’t trust them on the AI safety issue.
I think you might have been right in the 1950s, but by now the cat is firmly out of the bag on this one.