Hey OmariZi,

Partly the ranking is based on an overall judgement call. We list some of the main inputs into it here.
That said, for the ‘ratings in a nutshell’ section, I think you need to look at the more quantitative version.
Here’s the summary for AI:
Scale: We think work on positively shaping AI has the potential for a very large positive impact, because the risks AI poses are so serious. We estimate that the risk of a severe, even existential catastrophe caused by machine intelligence within the next 100 years is something like 10%.
Neglectedness: The problem of potential damage from AI is somewhat neglected, though it is getting more attention with time. Funding seems to be on the order of $100 million per year. This includes work on both technical and policy approaches to shaping the long-run influence of AI by dedicated organisations and teams.
Solvability: Making progress on positively shaping the development of artificial intelligence seems moderately tractable, though we’re highly uncertain. We expect that doubling the effort on this issue would reduce the most serious risks by around 1%.
Here’s the summary for factory farming:
Scale: We think work to reduce the suffering of present and future nonhuman animals has the potential for a large positive impact. We estimate that ending factory farming would increase the expected value of the future by between 0.01% and 0.1%.
Neglectedness: This issue is moderately neglected. Current spending is between $10 million and $100 million per year.
Solvability: Making progress on reducing the suffering of present and future nonhuman animals seems moderately tractable. There are some plausible ways to make progress, though these likely require technological and expert support.
You can see that we rate them similarly for neglectedness and solvability, but think the scale of AI alignment is 100-1000x larger. This is mainly due to the potential of AI to contribute to existential risk, or to other very long-term effects.
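To make the comparison concrete, here is a rough arithmetic sketch of my own (not from the original comment), using only the scale figures quoted above: treating the ~10% existential-risk estimate for AI as the scale of that problem, and the 0.01%–0.1% increase in the expected value of the future as the scale of ending factory farming.

```python
# Scale of AI risk: ~10% chance of a severe, even existential
# catastrophe caused by machine intelligence within 100 years.
ai_scale = 0.10

# Scale of factory farming: ending it is estimated to increase the
# expected value of the future by between 0.01% and 0.1%.
ff_scale_low = 0.0001   # 0.01%
ff_scale_high = 0.001   # 0.1%

# Ratio of the two scale estimates.
ratio_low = ai_scale / ff_scale_high   # 100x
ratio_high = ai_scale / ff_scale_low   # 1000x

print(ratio_low, ratio_high)  # 100.0 1000.0
```

That recovers the "100-1000x larger" scale figure directly, which is why the overall ratings differ even though neglectedness and solvability are rated similarly.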
Thanks Ben, that helps a lot!