I think much of the variance in estimates comes from the high variance in people's estimates of how large x-risk is. (Admittedly, a lot of the variance also comes from different people using different methods to answer the question, but even if everyone used the same method, I would expect substantial variance from this source.)
Some people might say there is a 50% probability of an existential catastrophe this century, while others might say 2%, which leads to very different amounts of money they would be willing to spend.
But because in both cases x-risk reduction is still (by far) the most effective thing you can do, it may make sense to instead ask how much you would pay for a 0.01% relative reduction of the current x-risk (i.e. from 3% to 2.9997%).
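To spell out the arithmetic behind that parenthetical (taking the 3% figure purely as an illustrative baseline): a 0.01% relative reduction multiplies the risk by (1 − 0.0001), so 3% × 0.9999 = 2.9997%, an absolute drop of 0.0003 percentage points.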
I think this framing would produce more agreement, because it is probably easier to estimate, for example, how much some grant decreases AI risk relative to the overall AI risk than to estimate how high the overall AI risk is.
Framed that way, however, the question might be slightly more confusing. I do think we should also make progress on estimating the overall probability of an existential catastrophe occurring in the next couple of centuries, but the question I suggest might still be the better way to estimate what we actually want to know.