Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I’d like to see more articles like this.
One thing I focus on when trying to make ITC more practical is finding ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Importance can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to roughly one. The Crowdedness adjustment factor, which captures diminishing or increasing marginal returns, varies only by factors of thousands, from negative tens to positive thousands.
In summary, the ranges are intuitively roughly:
Importance (util/%progress): (-10^9, 10^9)
Tractability (%progress/$): (-10^-6, 1)
Crowdedness adjustment factor ($/$in): (-10, 10^3)
Let’s assume each intervention’s factor values are random samples from probability distributions over these ranges. Roughly speaking, we should then care about each factor to the degree that it helps us clearly see which intervention is better than another.
The extent to which these factors let us distinguish between the values of interventions depends on our per-factor uncertainty for each intervention and on how the value depends on each factor. Because the value is equal to Importance*Tractability*CrowdednessAdjustmentFactor, each factor enters the product in the same way (there is abstract symmetry). Thus we only need to consider how big each factor’s range is in terms of our typical per-intervention uncertainty about that factor. This then tells us how useful each factor is at distinguishing interventions by value.
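To make that concrete, the quantity I have in mind for each factor (my own shorthand, using the ranges above and the uncertainties below) is:

normalized range = (upper bound − lower bound) / typical per-intervention uncertainty

The more of these “distinguishing units” a factor spans, the more power it has to separate one intervention from another.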
Pulling numbers out of the intuitive hat for the typical per-intervention uncertainty, I get:
Importance (util/%progress uncertainty unit): 10
Tractability (%progress/$ uncertainty unit): 10^-6
Crowdedness adjustment factor ($/$in uncertainty unit): 1
Dividing the ranges into these units lets us measure the distinguishing power of each factor:
Importance normalized range (distinguishing units): 10^8
Tractability normalized range (distinguishing units): 10^6
Crowdedness adjustment factor normalized range (distinguishing units): 10^3
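As a sanity check, here’s a minimal Python sketch of the arithmetic behind these numbers (the ranges and uncertainties are the intuitive guesses above, not data):

```python
import math

# Intuitive ranges and typical per-intervention uncertainties from above.
ranges = {
    "Importance":   (-1e9, 1e9),   # util/%progress
    "Tractability": (-1e-6, 1.0),  # %progress/$
    "Crowdedness":  (-10.0, 1e3),  # $/$in
}
uncertainty = {"Importance": 10.0, "Tractability": 1e-6, "Crowdedness": 1.0}

for factor, (lo, hi) in ranges.items():
    units = (hi - lo) / uncertainty[factor]  # distinguishing units
    print(f"{factor}: ~10^{round(math.log10(units))}")
# Prints 10^8, 10^6, 10^3, matching the list above.
```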
As a rule of thumb, then, it looks like focusing on Importance beats focusing on Tractability, which beats focusing on Crowdedness. This lends itself to a sequence of increasingly accurate heuristics for comparing the value of interventions (a simulation sketch follows the list):
Importance only
Importance and Tractability
The full ITC framework
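To illustrate why this ordering of heuristics works, here is a hedged simulation sketch. It assumes each factor is drawn log-uniformly over the positive part of its range above; that distributional choice is mine, and real interventions could look quite different:

```python
import random

def draw_log_uniform(lo_exp: float, hi_exp: float) -> float:
    """Sample log-uniformly over [10**lo_exp, 10**hi_exp] (an assumption)."""
    return 10 ** random.uniform(lo_exp, hi_exp)

def value(i: float, t: float, c: float) -> float:
    # Value = Importance * Tractability * CrowdednessAdjustmentFactor
    return i * t * c

random.seed(0)
trials = 10_000
agree = 0
for _ in range(trials):
    # Two hypothetical interventions, positive factor branches only:
    # I in [10^0, 10^9], T in [10^-6, 10^0], C in [10^0, 10^3].
    a = (draw_log_uniform(0, 9), draw_log_uniform(-6, 0), draw_log_uniform(0, 3))
    b = (draw_log_uniform(0, 9), draw_log_uniform(-6, 0), draw_log_uniform(0, 3))
    # Does the "Importance only" heuristic pick the same winner as full ITC?
    if (a[0] > b[0]) == (value(*a) > value(*b)):
        agree += 1
print(f"Importance-only picks the full-ITC winner in ~{agree / trials:.0%} of pairs")
```

Under these assumptions Importance alone agrees with the full product most of the time, and adding Tractability closes most of the remaining gap, which is the sense in which each heuristic in the sequence improves on the last.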
(The above analysis is only approximately correct and will depend on details like the precise probability distribution over interventions you’re comparing and your uncertainty distributions over interventions for each factor.
The ITC framework can be further extended in several ways, like: making precise the curves interventions trace over the ITC factors, extending the analysis of resources to other possible bottlenecks like time and people, incorporating the ideas of comparative advantage and marketplaces, …. I hope someone does this!)
(PS I’m thinking of making this into a short post, and I enjoy writing collaborations, so if someone is interested send me an EA forum message.)
Hi Justin, thanks for the comment.
I’m in favor of reducing the complexity of the framework, but I’m not sure if this is the right way to do it. In particular, estimating “importance only” or “importance and tractability only” isn’t helpful, because all three factors are necessary for calculating MU/$. A cause that scores high on I and T could be low MU/$ overall, due to being highly crowded. Or is your argument that the variance (across causes) in crowdedness is negligible, and therefore we don’t need to account for diminishing returns in practice?
My argument is the latter: the variances decrease in size from I to T to C. The unit analysis still works because the other factors are still implicitly there; they are just treated as constants when dropped from the framework.
I guess I’m expecting diminishing returns to be an important factor in practice, so I wouldn’t place much weight on an analysis that excludes crowdedness.