Similarly to Owen’s comment, I also think that AI and nuclear interact in important ways (various pathways to destabilisation that do not necessarily depend on AGI). It seems that many (most?) pathways from AI risk to extinction lead via other GCRs eg pandemic, nuclear war, great power war, global infrastructure failure, catastrophic food production failure, etc. So I’d suggest quite a bit more hedging with focus on these risks, rather than putting all resources into ‘solving AI’ in case that fails and we need to deal with these other risks.
Great points, Matt.
I think essentially all (not just many) pathways from AI risk will have to flow through other, more concrete pathways. AI is a general-purpose technology, so directly comparing AI risk with other lower-level pathways of risk, as 80k seems to be doing somewhat when they describe the scale of their problems, is a little confusing to me. To be fair, 80k tries to account for this by talking about the indirect risk of specific risks, which they often set to 10 times the direct risk, but these adjustments seem very arbitrary to me.
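To illustrate why that multiplier matters, here is a quick sketch with purely hypothetical numbers (not 80k's actual figures): moving the assumed indirect-to-direct ratio from 2 to 10 changes the overall scale estimate by almost a factor of 4.

```python
# Hypothetical direct existential risk from one specific risk (illustrative only).
direct_risk = 0.001

# Total risk estimate for different assumed indirect-to-direct multipliers.
for multiplier in (2, 10):
    total_risk = direct_risk * (1 + multiplier)
    print(f"multiplier = {multiplier:>2}: total risk = {total_risk:.3f}")
# multiplier =  2: total risk = 0.003
# multiplier = 10: total risk = 0.011
```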
In general, one can get higher risk estimates by describing risk at a higher level. So the existential risk from LLMs is smaller than the risk from AI, which is smaller than the risk from computers, which is smaller than the risk from e.g. subatomic particles. However, this should only update one towards e.g. prioritising “computer risk” over “LLM risk” to the extent that the ratio between the cost-effectiveness of “computer risk interventions” and “LLM risk interventions” matches the ratio between the scale of “computer risk” and “LLM risk”, which is quite unclear given the ambiguity and vagueness of the 4 terms involved[1].
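As a concrete sketch of this point, with made-up numbers chosen only for illustration (not estimates from 80k or anyone else), a broader risk category can dominate on scale while still losing on cost-effectiveness:

```python
# Purely illustrative numbers: a broader category ("computer risk") can have a
# larger scale than a narrower one ("LLM risk") while its interventions are
# less cost-effective.

scale = {"LLM risk": 0.01, "computer risk": 0.05}  # hypothetical existential risk

# Hypothetical cost-effectiveness, in existential risk reduced per $1M spent.
cost_effectiveness = {
    "LLM risk interventions": 1e-6,
    "computer risk interventions": 2e-7,
}

scale_ratio = scale["computer risk"] / scale["LLM risk"]
ce_ratio = (cost_effectiveness["computer risk interventions"]
            / cost_effectiveness["LLM risk interventions"])

print(f"Scale ratio (computer / LLM): {scale_ratio:.1f}")            # 5.0
print(f"Cost-effectiveness ratio (computer / LLM): {ce_ratio:.1f}")  # 0.2

# The larger scale of "computer risk" only argues for prioritising it if the
# cost-effectiveness ratio is comparably large; with these numbers it is not.
```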
To get more clarity, I believe it is better to prioritise at a lower level, assessing the cost-effectiveness of specific classes of interventions, as Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Centre for Exploratory Altruism Research (CEARCH), and GiveWell do.
[1] “Computer risk”, “LLM risk”, “computer risk interventions” and “LLM risk interventions”.