Seeking explanations of comparative rankings in 80k priorities list
Hi there, I’m looking for an explanation of why some problems on the 80k list are ranked as more pressing than others.
I understand that the ranking is supposedly based on ITN (importance/tractability/neglectedness) scores, with less pressing problems scoring lower. However, from looking at a few examples, some of the problems listed as less pressing seem to have scores in those categories comparable to problems listed as more pressing.
For example, if we compare AI (top priority), nuclear security (second-highest priority) and factory farming (lower priority), the ratings in a nutshell (according to the problem profile summaries) are:
AI: very large impact / somewhat neglected / moderately tractable
Nuclear: large impact / not very neglected / somewhat-to-moderately tractable
Factory farming: large impact / moderately neglected / moderately tractable
Those ratings don’t seem to line up with how pressing the problems are ranked relative to each other, so it seems like there must be additional reasons behind 80k’s differential ranking, and those reasons don’t seem to be explained clearly on the overall list or on the individual profiles.
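(To spell out where my confusion comes from: my rough understanding, from 80k’s framework write-up, is that the three factors are meant to combine multiplicatively, so similar-sounding labels can still hide a large difference in one factor. The function and numbers below are just my own illustration, not 80k’s actual scores.)

```python
# Rough sketch of how I understand the ITN factors combine (illustrative only;
# the function name and numbers are my own, not 80k's actual scores).
def itn_product(importance, tractability, neglectedness):
    """Marginal value of extra work, roughly proportional to the product of the three factors."""
    return importance * tractability * neglectedness

# Made-up numbers: a problem with 100x the importance can still beat one that is
# somewhat more neglected and more tractable.
problem_a = itn_product(importance=1000, tractability=1.0, neglectedness=1.0)
problem_b = itn_product(importance=10, tractability=2.0, neglectedness=3.0)
print(problem_a, problem_b)  # 1000.0 60.0
```

If that’s roughly right, the qualitative labels in the summaries seem too coarse to recover the ranking on their own.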
My best guess (at present) is that there is also a “long-term value” judgement going on, so that, for example, factory farming is judged lower priority because it’s not clear that decreasing animal suffering in the short term will have further positive flow-through effects, compared to other issues. This is alluded to in the FF profile, but it’s still not clear whether it’s one of the decisive reasons for factory farming being less recommended than the other options.
Can anyone help clear this up for me? Thanks in advance!
Hey OmariZi,
The ranking is partly based on an overall judgement call. We list some of the main inputs into it here.
That said, rather than the ‘ratings in a nutshell’ section, I think you need to look at the more quantitative version.
Here’s the summary for AI:
Here’s the summary for factory farming:
You can see that we rate them similarly for neglectedness and solvability, but think the scale of AI alignment is 100-1000x larger. This is mainly due to the potential of AI to contribute to existential risk, or to other very long-term effects.
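To make that concrete, here’s a simplified sketch of how the quantitative scores combine, with illustrative numbers rather than the exact figures from the profiles: each score sits on a roughly logarithmic scale (one extra point is about a 10x difference), and the total is what drives the comparison.

```python
# Simplified sketch of how the quantitative scores combine (illustrative numbers,
# not the exact figures from the problem profiles). Each score is on a log scale,
# roughly one point per 10x, and the total tracks the expected impact of marginal work.
def total_score(scale, neglectedness, solvability):
    """Sum of log-scale scores; a higher total means more pressing at the margin."""
    return scale + neglectedness + solvability

# Similar neglectedness and solvability, but a scale score 2-3 points higher
# (i.e. a 100-1000x larger scale) dominates the comparison.
ai = total_score(scale=15, neglectedness=9, solvability=4)
factory_farming = total_score(scale=12, neglectedness=9, solvability=4)
print(ai - factory_farming)  # 3 points, i.e. roughly a 1000x difference at the margin
```

So with neglectedness and solvability held roughly equal, a 2-3 point gap in scale settles the overall comparison.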
Thanks Ben, that helps a lot!