I just want to register a meta-level disagreement with this post: your recommendations seem like really bad epistemics. I don't think we should heuristic and information-cascade ourselves to death as a community; we should actually build good gears-level models for forecasting AI progress.
You point out that AI accelerationist arguments act as soldiers, but you are literally deploying arguments as soldiers in this post!
You recommend terrible, gossip-based, anti-agency mechanisms instead of pro-agency actions like working on safety, upskilling, and field-building.
You make a lot of arguments by negation that feel like weird sleights of hand. For instance, you say "We don't know of any major AI lab that has participated in slowing down AGI development, or publicly expressed interest in it", but OpenAI's charter literally has the assist clause (regardless of whether or not you believe it's a promise they will keep, it exists).
To be clear, I think there are good arguments for short timelines (median 5-10 years), but you don't actually make them here[1]. What you do instead is:
- Say you could voice technical disagreement, but won't give any empirical examples/obstacles because that would be infohazardous.
- Lean on heuristic arguments that can't be verified or prodded because they come from "private conversations", which is fine I guess, but then what do you want people to do with that?
I think people should think for themselves and engage with the arguments and models others provide for timelines and threat models, but this post doesn't do that. It just directionally vibes a high p(doom) with short timelines and tells people to panic and gossip.
[1] For instance: https://www.lesswrong.com/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute