Foom is not a requirement for AI-risk worries. If it were, I would be even less worried, because in my opinion AI-go-foom is extremely unlikely. Correct me if I'm wrong, but I was under the impression that plenty of AI x-riskers were not foomers?
I was inexact: by "post-foom" I simply meant after a capabilities takeoff occurs, regardless of whether that takes months, years, or even decades, as long as humanity doesn't manage to notice and successfully stop ASI from being deployed.
I think even the foom skeptics (e.g. Christiano) expect a foom to eventually happen, even if there is a slow takeoff over many years first.