By calling out one kind of mistake, we don’t want to incline people toward making the opposite mistake. We are calling for more careful evaluations of projects, both within AI and outside of AI. But we acknowledge the risk of focusing on just one kind of mistake (and focusing on an extreme version of it, to boot). We didn’t pursue comprehensive analyses of which cause areas will remain important conditional on short timelines (and the analysis we did give was pretty speculative), but that would be a good future project. Very near future, of course, if short-ish timelines are correct!