Yes! Probably when we think of Importance, Neglectedness, and Tractability, we should also consider informativeness!
We’ve considered wrapping it into the problem framework in the past, but it can easily get confusing. Informativeness is also more a feature of how you go about working on a cause than of which cause you’re focused on.
The current way we show that we think VOI is important is by listing Global Priorities Research as a top area (though I agree that doesn’t quite capture it). I also talk about it often when discussing how to coordinate with the EA community (VOI is a bigger factor from the community perspective than from the individual perspective).
The ‘Neglectedness’ criterion already gives you a pretty big tilt in favour of working on underexplored problems. But value of information is an important factor in choosing which project to work on within a problem area.
I think I agree with this—it’s usually the case that one particular sub-problem in an area is particularly informative to work on.
However, I think it’s at least possible that some areas are systematically very informative to work on. For example, if the primary work is research, then you should expect to mainly be outputting information. AI research might be like this.