> This is because many actions that would be helpful under one theory of how things will play out would be harmful under another (for example, see my discussion of the “caution” frame vs. the “competition” frame).
>
> It seems to me that in order to more productively take actions (including making more grants), we need to get more clarity on some crucial questions such as “How serious is the threat of a world run by misaligned AI?” But it’s hard to answer questions like this, when we’re talking about a development (transformative AI) that may take place some indeterminate number of decades from now.
I agree and have been saying as much for a while. The nearcasting approach is interesting and worth exploring, but reality is complicated: there are many potentially relevant variables, deep uncertainty about those variables’ values and interactions, and no way to simply compute insights via empirical research and big-data crunching. Given that, I still think people should consider that what we may need is just a lot of rigorous theoretical legwork, just as it’s often beneficial/necessary to use rigorous empirical methods (e.g., statistical hypothesis testing, data-based high-fidelity engineering simulations[1]) in STEM research rather than intuiting conclusions from mental models.
Unless there is some methodological innovation that makes theoretical research as legible and reproducible as the empirical methods we’ve come to rely on over the past century, I think this kind of theoretical work should at least demand the sort of (1) assumption explication, (2) argument/rebuttal tracking, and (3) viewpoint/argument generation and aggregation that I described in my research project post-mortem.
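To make (1) and (2) a bit more concrete, here is a minimal, purely illustrative sketch in Python of what explicit assumption and argument/rebuttal tracking could look like. The class names, fields, and helper are hypothetical, my own toy rendering rather than anything specified in the post-mortem:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """An explicitly stated premise that a line of reasoning depends on."""
    statement: str
    credence: float  # subjective probability that the assumption holds

@dataclass
class Argument:
    """A claim, the assumptions it rests on, and any rebuttals raised against it."""
    claim: str
    assumptions: list["Assumption"] = field(default_factory=list)
    rebuttals: list["Argument"] = field(default_factory=list)

def open_rebuttals(arg: "Argument") -> list["Argument"]:
    """Recursively collect rebuttals that have not themselves been answered."""
    found = []
    for r in arg.rebuttals:
        if not r.rebuttals:  # unanswered rebuttal: flag for follow-up
            found.append(r)
        else:
            found.extend(open_rebuttals(r))
    return found

# Toy usage: surface which objections to a claim remain unaddressed.
a = Argument(
    claim="Misaligned AI is the dominant risk from transformative AI.",
    assumptions=[Assumption("Alignment won't be solved by default.", 0.5)],
    rebuttals=[Argument(claim="Misuse by humans could be a comparable risk.")],
)
print([r.claim for r in open_rebuttals(a)])
```

Even a toy structure like this forces the assumptions behind each claim into the open and makes unanswered rebuttals queryable rather than forgotten, which is the legibility property I have in mind.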
But I’d be curious to hear other people’s thoughts!
[1] The status of this as an empirical method is certainly disputable, but I think it’s not worth quibbling over semantics here/yet.