Great to see this! I’m very sympathetic to the value of mental health as a cause area, so wonderful to see this written up, thank you.
One suggestion (it may be too late for this write-up, but may be useful for future reference): other cause write-ups (e.g. by 80k or FP) have given numerical scores to each of Impact, Neglectedness, and Tractability, and I think this would have been good to see here too.
Doing this for MH and other causes would have better conveyed the nuances in your thinking. For example, you make the case for mental health being neglected, but presumably you think that other things (e.g. x-risk?) are more neglected. And you make the case for mental health being tractable, but presumably you think that other things (sending cash to the poor?) are more tractable. A table of scores would have helped you sound balanced, while still supporting your overall conclusion.
Hello Sanjay. I didn’t do this because I think comparing causes by assigning numerical scores to I, N and T is of illusory helpfulness, and I wish we would all stop doing it(!). What we care about is the expected value of the dollar you would donate (or, more complicatedly, the hour you would spend). I’ve produced some numbers by doing cost-effectiveness estimates of a charity you could donate to. Given that’s what we ultimately want, it’s unclear what positive value there is in representing things via the INT approach. I have a thesis chapter/EA Forum post forthcoming on this topic, but I’ll make a couple of points here.
First, note that on the 80k framework, INT literally is a cost-effectiveness calculation, not (as Will uses it in Doing Good Better) three independent heuristics which somehow combine to give a rough idea of cost-effectiveness. Indeed, it’s more confusing to do expected value the way 80k suggests than the way I did it, as their method requires redundant and arbitrary steps. 80k specify neglectedness as “% increase in resources/extra person or dollar”. It is later defined as “How many people, or dollars, are currently being dedicated to solving the problem?” But deciding what counts as dollars being dedicated to “solving the problem” is arbitrary, so there cannot be a precise answer to this question.
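To spell out the claim that INT “literally is a cost-effectiveness calculation”, here is the 80k decomposition written out in full (my paraphrase of their definitions, using the same quantities quoted above):

```latex
\begin{align*}
\text{importance}    &= \frac{\text{good done}}{\%\text{ of problem solved}}\\
\text{tractability}  &= \frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}\\
\text{neglectedness} &= \frac{\%\text{ increase in resources}}{\text{extra dollar}}\\[4pt]
\text{I} \times \text{T} \times \text{N} &= \frac{\text{good done}}{\text{extra dollar}} = \text{cost-effectiveness}
\end{align*}
```

The intermediate terms cancel, so the product of the three factors is just good done per extra dollar, i.e. cost-effectiveness.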
Further, if I wanted to put mental health in 80k’s framework, note that in addition to establishing an arbitrary neglectedness score, I’d have to ascertain solvability, found by asking “If we doubled direct effort on this problem, what fraction of the remaining problem would we expect to solve?” How would I do that? I’d have to work out the total size of the problem, then assess how much of it would be solved by some given intervention. To do that, I’d need to work out the cost-effectiveness of a mental health intervention. But I’ve already done that, so I can only calculate the tractability/solvability number once I already have the information that is ultimately of interest to me.
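To make the circularity concrete, here is a minimal sketch with entirely made-up numbers (none of these are real estimates of anything):

```python
# Hypothetical numbers for illustration only -- none are real estimates.
total_problem_size = 100_000_000  # total size of the problem, e.g. DALYs at stake (assumed)
current_spending = 50_000_000     # dollars per year currently spent on it (assumed)
cost_effectiveness = 0.001        # DALYs averted per dollar: the figure we
                                  # already had to estimate directly

# 80k-style solvability: if we doubled direct effort on this problem,
# what fraction of the remaining problem would we expect to solve?
extra_dollars = current_spending            # "doubling" adds this much
problem_solved = cost_effectiveness * extra_dollars
solvability = problem_solved / total_problem_size

print(f"solvability per doubling: {solvability:.2%}")
# cost_effectiveness is an *input* above, so the solvability score can only
# be produced after the quantity we actually care about is already known.
```

The solvability score is derived from the cost-effectiveness figure, not the other way round.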
I don’t see how it’s an improvement over the formula cost-effectiveness = effect/cost to say cost-effectiveness = (effect / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / cost). As demonstrated, it’s (at least sometimes) harder to calculate cost-effectiveness this latter way. If we really think scale is important to keep in mind, we could have a two-factor model: scale (value of solving the whole problem) and solvability* (% of problem solved/cost).
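For what it’s worth, this two-factor version collapses to the same quantity (my notation, with solvability* as defined above):

```latex
\text{scale} \times \text{solvability}^{*}
  = \frac{\text{value of whole problem} \times \%\text{ of problem solved}}{\text{cost}}
  = \frac{\text{effect}}{\text{cost}}
```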
Second, I don’t see the point of taking one ranking of scale/neglectedness/tractability for each of two causes and comparing those. What does it tell us that X is more neglected/tractable/large than Y, if that is all we know about X and Y? By itself, it literally tells us nothing about the expected value of marginal resources to X vs Y. We only understand that once we’ve thought about how scale, neglectedness and tractability combine to give us cost-effectiveness. To bring this out, imagine you and I are having a conversation.
Sanjay: “mental health is more neglected than poverty”.
Michael: “and? That doesn’t tell me which one has higher expected value”.
S: “hmm. Poverty is bigger”.
M: “again? So what? That doesn’t tell me which one has higher expected value either”.
S: “Okay, well, poverty is more tractable than mental health”.
M: “and? So what? In fact, what do you mean by ‘tractable’? If you mean ‘has higher expected value’, then you’re just saying poverty is better than mental health, and I don’t know how you factored in neglectedness and size when assessing tractability. If by tractability you mean ‘if we doubled direct effort on this problem, what fraction of the remaining problem would we expect to solve?’, then I only know which cause you think has higher expected value once you give me precise scores for scale, neglectedness and tractability and tell me how you’re combining those scores to give expected value.”
S: Michael, why are you always so difficult? [curtain falls]
By analogy, if we want to know the speed of some object (speed = distance/time), knowing just the distance it has travelled, or just the time it took, gives us absolutely no insight into its speed. Do objects which travel further tend to travel faster? Do they always travel faster?
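Here is a toy numerical version of the point, with invented figures (none of these are real cause estimates): cause A below is ten times more neglected than cause B, yet B is five times more cost-effective.

```python
# Invented numbers purely for illustration -- not real cause estimates.
# scale: value of solving the whole problem (arbitrary units)
# spending: current annual resources (dollars)
# solved_per_doubling: fraction of the problem solved by doubling effort
causes = {
    "A": {"scale": 1_000, "spending": 1_000_000, "solved_per_doubling": 0.001},
    "B": {"scale": 5_000, "spending": 10_000_000, "solved_per_doubling": 0.01},
}

for name, c in causes.items():
    # good done per extra dollar = (scale * fraction solved) / cost of doubling
    ce = c["scale"] * c["solved_per_doubling"] / c["spending"]
    print(f"Cause {name}: cost-effectiveness = {ce:.2e} units per dollar")

# A is 10x more neglected (less spending), but:
# A: 1000 * 0.001 / 1e6 = 1e-6 units/dollar
# B: 5000 * 0.01  / 1e7 = 5e-6 units/dollar
# B wins; neglectedness alone told us nothing.
```

Only the combination of all the factors fixes the expected value; any single factor in isolation is like knowing the distance without the time.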
Third, I don’t think it even makes sense to talk about comparing causes as opposed to comparing interventions. What we’re really doing when we do cause prioritisation is saying “there are problems of types A, B and C. I’m going to find the best intervention I can that tackles each of A, B and C. Then I’m going to compare the best item I’ve found in each ‘bucket’.” Given we can’t give money to poverty (the abstract noun), but we can give to interventions that reduce poverty, we should just think in terms of interventions instead of causes.
Some good points here! On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness. The ITN framework therefore loses its force.
This being said, when we don’t know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason to think AI is a more promising cause to work on.
Generally, I think cost-effectiveness is often what people actually use to choose between causes. E.g. I choose the far future over global health because of broad cost-effectiveness estimates in my head, not because of the ITN framework.
On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness
Are you using the two ‘neglectedness’ words differently? In general, why would you calculate X if you already knew X?
This being said, when we don’t know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason to think AI is a more promising cause to work on
I think that’s right. One method is to use scale and/or neglectedness as (weak) independent heuristics for cost-effectiveness if you haven’t, or can’t, calculate cost-effectiveness. It’s unclear how to use tractability as a heuristic without implicitly factoring in information about neglectedness or scale. Another (the other?) method, then, is to directly assess cost-effectiveness. Once you’ve done that, you’ve incorporated the ITN stuff, and it would be double-counting to appeal to them again (“I know X is more cost-effective than Y, but Y is more neglected”, etc.).
I’m not sure I follow your first point.