Some good points here! On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness. The ITN framework therefore loses its force.
This being said, when we don’t know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason that AI is a more promising cause to work on.
Generally, I think cost-effectiveness is often what people actually use to choose between causes. e.g. I choose far future over global health because of broad cost-effectiveness estimates in my head, not because of the ITN framework.
On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness
Are you using the two ‘neglectedness’ words differently? Why would you calculate X if you already knew X in general?
This being said, when we don’t know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason that AI is a more promising cause to work on
I think that’s right. One method is to use scale and/or neglectedness as (weak) independent heuristics for cost-effectiveness if you haven’t or can’t calculate cost-effectiveness. It’s unclear how to use tractability as a heuristic without implicitly factoring in information about neglectedness or scale. Another (the other?) method, then, is to directly assess cost-effectiveness. Once you’ve done that, you’ve incorporated the ITN stuff and it would be double-counting to appeal to them again (“I know X is more cost-effective than Y, but Y is more neglected” etc.).
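One way to see the double-counting worry concretely: in the 80k version of the framework, the three factors are (roughly) defined so that their units cancel and the product just *is* marginal cost-effectiveness. A minimal sketch, with a made-up function name and purely illustrative numbers (the unit definitions below roughly follow 80k's write-up; none of the figures refer to real causes):

```python
# Rough 80k-style factorisation (illustrative only):
#   Scale:         good done / % of problem solved
#   Tractability:  % of problem solved / % increase in resources
#   Neglectedness: % increase in resources / extra person (or dollar)
# The units telescope, so the product is good done per extra person.

def marginal_cost_effectiveness(scale, tractability, neglectedness):
    # Product of the three ITN factors.
    return scale * tractability * neglectedness

# Two hypothetical causes with equal scale and tractability,
# where X is 1000x more neglected than Y.
x = marginal_cost_effectiveness(scale=100, tractability=0.1, neglectedness=1.0)
y = marginal_cost_effectiveness(scale=100, tractability=0.1, neglectedness=0.001)

print(round(x / y))  # 1000 -- the neglectedness gap alone drives the ratio
```

This is why, once you have a direct cost-effectiveness estimate, separately appealing to neglectedness counts the same information twice: neglectedness is already one of the multiplicands.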
I’m not sure I follow your first point.