I think an important one is: How likely is the project to reduce the uncertainty of the return?
E.g. will it resolve a crucial consideration?
Edit to give more detail:
Resolving a crucial consideration massively increases the value of all your future research. Take, for example, the question of whether there will be a hard or soft take-off. A hard take-off favours working on AI safety now, whereas a soft take-off favours building political and social institutions that encourage cooperation and avoid wars. Since both scenarios put humanity's future on the line, each is massively important conditional on it being the scenario that actually happens.
Resolving the question (or at least driving down the uncertainty) would let the whole community focus on the right scenario and get a lot more bang for their buck, even though answering it doesn't directly address either problem itself.
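To make the intuition concrete, here is a minimal value-of-information sketch in Python. All the numbers (the 50/50 prior on a hard take-off, the 0-or-1 payoffs for a matched or mismatched research agenda) are illustrative assumptions I've made up for the example, not estimates from the comment above:

```python
# Illustrative value-of-information sketch: how resolving the hard-vs-soft
# take-off question changes the expected value of the community's research effort.
# All numbers below are assumptions for illustration only.

p_hard = 0.5  # assumed prior probability of a hard take-off

# Assumed payoff of committing to each agenda, conditional on the scenario that occurs.
# A "mismatched" agenda is assumed to be worth nothing in the other scenario.
value = {
    ("ai_safety_now", "hard"): 1.0,
    ("ai_safety_now", "soft"): 0.0,
    ("institutions",  "hard"): 0.0,
    ("institutions",  "soft"): 1.0,
}
agendas = ("ai_safety_now", "institutions")

def expected_value(agenda: str, p_hard: float) -> float:
    """Expected value of putting all effort into one agenda under the prior."""
    return p_hard * value[(agenda, "hard")] + (1 - p_hard) * value[(agenda, "soft")]

# Without resolving the question: pick the agenda with the best prior expectation.
ev_unresolved = max(expected_value(a, p_hard) for a in agendas)

# With the question resolved: in each scenario the community picks the matching agenda.
ev_resolved = (p_hard * max(value[(a, "hard")] for a in agendas)
               + (1 - p_hard) * max(value[(a, "soft")] for a in agendas))

print(f"Expected value before resolution: {ev_unresolved:.2f}")
print(f"Expected value after resolution:  {ev_resolved:.2f}")
print(f"Value of resolving the question:  {ev_resolved - ev_unresolved:.2f}")
```

Under these made-up numbers the community's expected payoff jumps from 0.5 to 1.0, which is the sense in which answering the question raises the value of everything that comes after it.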