a) I don’t think “very high certainty” interventions exist for x-risk, no. But I think there exist interventions where people can produce relatively robust estimates if given enough time, in the sense that further armchair thinking and near-term empirical feedback are unlikely to shift the numbers by more than, say, 0.5 orders of magnitude.
And when that happens, the uncertainty in the moral debate of “how much funding per unit x-risk reduction is moral?” would get overshadowed by the uncertainty in the more practical debate of “how much x-risk reduction does this intervention provide?”
I think you’re misunderstanding this question. I’m not asking how much funding per unit of x-risk reduction is moral in the abstract; I’m asking to get a sense of what the current funding margin looks like, as a way to help researchers and others prioritize our efforts.
Now in theory, with perfect probabilistic calibration, assessment, and coordination, EA should just fund the marginally most cost-effective thing until we run out of money. But in practice we just have a lot of uncertainty, etc. Researchers often have a sense (not necessarily a very good one!) of how cost-effective a few of the projects they are investigating are, and maybe a rougher sense for a larger number of other projects, but may not have a deep sense of the margin at which funders are sufficiently excited to fund (I know I at least didn’t have a good idea before working through this question! And I’m still somewhat confused).
If we have a sense of what the margin/price point looks like (or even rough order-of-magnitude estimates), then it’s easier to be actively excited about doing research on, or incubating, new projects priced well below that point, to deprioritize research on projects priced well above it, and to work hard on figuring out more accurate pricing for projects near it.
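To make that triage logic concrete, here’s a minimal back-of-envelope sketch in Python. Every project name, cost, and x-risk-reduction figure below is made up purely for illustration; the only point is the mechanism: rank projects by cost per unit of risk reduction, fund down the ranking until the budget runs out, and read off the “price point” at the margin.

```python
# Hypothetical back-of-envelope sketch of the idealized model above.
# (name, cost in $M, x-risk reduction in basis points) -- all figures made up.
projects = [
    ("A", 10, 5.0),
    ("B", 50, 8.0),
    ("C", 5, 0.2),
    ("D", 100, 1.0),
]
budget = 120  # $M, hypothetical

# Rank by cost per basis point of x-risk reduction (lower = more cost-effective).
ranked = sorted(projects, key=lambda p: p[1] / p[2])

funded, spent = [], 0
for name, cost, reduction in ranked:
    if spent + cost <= budget:
        funded.append((name, cost / reduction))
        spent += cost

# The price of the last project funded is (roughly) the current margin.
marginal_price = funded[-1][1]
print(f"Funded: {[n for n, _ in funded]}, marginal price ≈ ${marginal_price:.1f}M per basis point")

# Triage rule from the comment: be excited about projects priced well below
# this margin, deprioritize ones well above it, and spend the most effort
# pricing the ones near it more accurately.
```

In reality nobody has this table, which is exactly the problem: researchers have noisy estimates for a few rows, and the marginal price is itself uncertain, which is why even a rough order-of-magnitude sense of it is useful.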