Great points, Matt.
I think essentially all (not just many) pathways from AI risk will have to flow through other, more concrete pathways. AI is a general-purpose technology, so directly comparing AI risk with other, lower-level pathways of risk, as 80k seems to do somewhat when describing the scale of their problems, strikes me as a little confusing. To be fair, 80k tries to account for this by talking about the indirect risk of specific risks, which they often set to 10 times the direct risk, but these adjustments seem very arbitrary to me.
In general, one can get higher risk estimates by describing risk at a higher level. So the existential risk from LLMs is smaller than the risk from AI, which is smaller than the risk from computers, which is smaller than the risk from e.g. subatomic particles. However, this should only update one towards e.g. prioritising "computer risk" over "LLM risk" to the extent the ratio between the cost-effectiveness of "computer risk interventions" and "LLM risk interventions" is proportional to the ratio between the scale of "computer risk" and "LLM risk", which is quite unclear given the ambiguity and vagueness of the four terms involved[1].
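To make the proportionality condition concrete, here is a minimal sketch using a scale-times-tractability decomposition of cost-effectiveness (CE). The decomposition and the numbers in it are my illustrative assumptions, not estimates from 80k or anyone else:

```latex
% Toy decomposition of the proportionality condition.
% The factor names and numbers are illustrative assumptions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\frac{\mathrm{CE}_{\text{computer}}}{\mathrm{CE}_{\text{LLM}}}
= \frac{\mathrm{Scale}_{\text{computer}}}{\mathrm{Scale}_{\text{LLM}}}
\times
\frac{\mathrm{Tractability}_{\text{computer}}}{\mathrm{Tractability}_{\text{LLM}}}
\]
% Example: a scale ratio of 10 combined with a tractability
% ratio of 1/20 gives a cost-effectiveness ratio of 0.5, so the
% larger scale alone does not imply the broader risk framing
% should be prioritised.
\end{document}
```

For example, if "computer risk" is 10 times as large as "LLM risk", but interventions at the computer level are 20 times less tractable (say, because effort is diluted across many systems besides LLMs), the cost-effectiveness ratio is 10 times 1/20, i.e. 0.5, and the broader framing loses despite its larger scale.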
To get more clarity, I believe it is better to prioritise at a lower level, assessing the cost-effectiveness of specific classes of interventions, as Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Centre for Exploratory Altruism Research (CEARCH), and GiveWell do.
[1] "Computer risk", "LLM risk", "computer risk interventions" and "LLM risk interventions".