I don’t understand how the robustness argument works, I couldn’t steelman it.
If you want to assess the priority of an intervention by breaking down its priority Q into I, T & N:
if you multiply them together, your estimate is no more robust than with any other breakdown;
if you don't, then you can't say anything about the overall priority of the intervention.
What's your strategy for getting highly robust estimates of numerical quantities? How do you ground it? (And why would it work only with the ITN breakdown of Q, and not with any other breakdown?)
Multiplying them together would be the same, it's true. I was talking about keeping it disaggregated: in this view, rather than a single priority Q, we have an "importance Q", a "tractability Q", and a "neglectedness Q", and we compare interventions that way.
The desire to have a total ordering over interventions is understandable, but I don't know if it's always good when changing one subjective probability estimate from 10^-5 to 10^-6 can flip your intervention from "fantastic deal" to "garbage deal". By limiting the effect of any one criterion, the ITN framework is more stable under changes to subjective estimates. Holden's cluster thinking vs sequence thinking essay goes into this in more detail.
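One way to read "keeping it disaggregated" (this is my illustrative guess, not necessarily what's meant here) is a Pareto-style comparison: only rank one intervention above another when it is at least as good on every criterion. A minimal sketch, with made-up numbers:

```python
# Illustrative sketch of a disaggregated comparison (my interpretation,
# not necessarily the author's): Pareto dominance over (I, T, N) tuples.
def dominates(a, b):
    """True if intervention a is at least as good as b on every
    criterion and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

a = (10, 1, 1)  # hypothetical (importance, tractability, neglectedness)
b = (1, 2, 2)
print(dominates(a, b), dominates(b, a))  # False False: neither dominates
```

The cost of this stability is that many pairs of interventions end up incomparable, which is exactly where the disagreement below picks up.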
Other breakdowns would be fine as well.
How would you compare these two interventions:
1: I=10 T=1 N=1
2: I=1 T=2 N=2
I feel like the best way to do that is to multiply things together.
And if you have error bars around I, T & N, then you can probably do something more precise, but still close in spirit to "multiply the three things together".
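The "multiply, with error bars" idea could be sketched as a small Monte Carlo: treat the point estimates of I, T and N as medians of lognormal distributions and look at the spread of the product. (The lognormal choice, the sigma value, and the function name are my assumptions, purely illustrative.)

```python
# Sketch (an assumption, not a method from the thread): propagate error
# bars on I, T, N by sampling each from a lognormal distribution and
# summarizing the distribution of the product Q = I * T * N.
import random

def sample_priority(mu_i, mu_t, mu_n, sigma=0.5, n_samples=10_000, seed=0):
    """Return (median, (10th percentile, 90th percentile)) of Q = I*T*N.

    mu_* are the point estimates (medians); sigma is the log-scale
    spread, i.e. roughly "a factor of e**sigma either way" on each input.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        i = mu_i * rng.lognormvariate(0.0, sigma)
        t = mu_t * rng.lognormvariate(0.0, sigma)
        n = mu_n * rng.lognormvariate(0.0, sigma)
        samples.append(i * t * n)
    samples.sort()
    median = samples[n_samples // 2]
    low, high = samples[int(0.1 * n_samples)], samples[int(0.9 * n_samples)]
    return median, (low, high)

# The two interventions from the example above:
print(sample_priority(10, 1, 1))  # median near 10, with a wide interval
print(sample_priority(1, 2, 2))   # median near 4, with a wide interval
```

With overlapping intervals like these you can report "intervention 1 looks better in expectation, but the ranges overlap" rather than a single hard ordering, which seems close in spirit to both positions above.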