I agree with many of Leopold’s empirical claims, his timelines, and much of his analysis. I’m acting on it myself, treating it as something like a mainline scenario in my planning.
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* I will give you a view into this small elite group, the only ones who are situationally aware
* the inner circle longed TSMC way before you
* if you believe me, you can get 100x richer: there’s still alpha, you can still be early
* This geopolitical outcome is “inevitable” (sic!)
* in the future the coolest and most elite group will work on The Project. “see you in the desert” (sic)
* Etc.
Combined with a lot of praising retweets on launch day that were clearly coordinated behind the scenes, it gives me the feeling that the piece was deliberately written to meme a narrative into existence via self-fulfilling prophecy, rather than to infer a forecast via analysis.
As a sidenote, this felt to me like an indication of how different the AI-safety-adjacent community is now from when I joined it about a decade ago. In the early days of this space, I expect a piece like this would have been something like “epistemically cancelled”: fairly strongly decried as violating important norms around reasoning and cooperation. I actually expect that had someone written this publicly in 2016, they would plausibly have been disinvited from speaking at any EAGs in 2017.
I don’t particularly want to debate whether these epistemic boundaries were correct—I’d just like to claim that, empirically, I think they de facto would have been enforced. Though, if others who have been around have a different impression of how this would’ve played out, I’d be curious to hear.
I agree with you about the bad argumentation tactics of Situational Awareness, but not about the object level. That is, I think Leopold’s arguments are both bad and false. I’d be interested in talking more about why they’re false, and I’m also curious about why you think they’re true.
I think some were false. For example, I don’t get the stuff about mini-drones undermining nuclear deterrence, as size will constrain your batteries enough that you won’t be able to do much of anything useful. Maybe I’m missing something (modulo nanotech).
I think it’s very plausible that scaling holds up, plausible that AGI becomes a natsec matter, and plausible that it will affect nuclear deterrence (via other means), for example.
What do you disagree with?
These are not just vibes—they are all empirical claims (except the last maybe). If you think they are wrong, you should say so and explain why. It’s not epistemically poor to say these things if they’re actually true.
(Instead of making all comments in both places, I’ll continue the discussion over at LessWrong: https://www.lesswrong.com/posts/i5pccofToYepythEw/against-aschenbrenner-how-situational-awareness-constructs-a#Hview8GCnX7w4XSre )