Seems pretty dependent on how seriously you take some combination of AI x-risk in general, the likelihood of the naïve scaling hypothesis holding (if it even holds at all), and what the trade-off between empirical and theoretical work on AI Safety is, no?