I am confused by this survey. Taken at face value, it suggests that working on improving Cooperation would be only about 2x less impactful than working on hard AI alignment (looking only at the importance of the problem), and that working on partial/naive alignment would be as impactful as working on hard AI alignment (again, looking only at importance).
Does that make sense?
(I make a bunch of assumptions to come up with these values. The starting point is the likelihood of each of the 5-6 X-risk scenarios. I then associate each scenario with the field (AI alignment, naive AI alignment, Cooperation) that reduces its likelihood, and from that produce the values above; they stay similar even if I assume a 2-step model where some scenarios happen before others. Google sheet)
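For concreteness, here is a minimal Python sketch of the kind of calculation I do in the sheet. The probabilities and the scenario-to-field mapping below are illustrative placeholders, not the survey's estimates or the sheet's exact figures.

```python
# Illustrative back-of-envelope model (placeholder numbers, not the survey's estimates).

# Hypothetical P(scenario | AI-caused existential catastrophe)
p_scenario = {
    "Superintelligence (Bostrom, 2014)": 0.20,
    "What failure looks like, part 2 (Christiano, 2019)": 0.20,
    "What failure looks like, part 1 (Christiano, 2019)": 0.20,
    "War (Dafoe, 2018)": 0.15,
    "Misuse (Karnofsky, 2016)": 0.15,
    "Other": 0.10,
}

# Hypothetical 1-1 mapping from each scenario to the field that reduces its likelihood
# (this is exactly the simplification discussed below).
field_of_scenario = {
    "Superintelligence (Bostrom, 2014)": "hard AI alignment",
    "What failure looks like, part 2 (Christiano, 2019)": "hard AI alignment",
    "What failure looks like, part 1 (Christiano, 2019)": "partial/naive alignment",
    "War (Dafoe, 2018)": "Cooperation",
    "Misuse (Karnofsky, 2016)": "Misuse limitation",
    "Other": "other",
}

# "Importance" of a field = probability mass of the scenarios it addresses.
importance = {}
for scenario, p in p_scenario.items():
    field = field_of_scenario[scenario]
    importance[field] = importance.get(field, 0.0) + p

# Compare every field to hard AI alignment.
reference = importance["hard AI alignment"]
for field, value in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{field:25s} importance={value:.2f}  (ratio vs hard alignment: {reference / value:.1f}x)")
```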
Thanks for your comment!
I doubt that it’s reasonable to draw these kinds of implications from the survey results, for a few reasons:
respondents were very uncertain
there’s overlap between the scenarios
there’s no 1-1 mapping between “fields” and risk scenarios (e.g. I’d strongly bet that improved cooperation of certain kinds would make both catastrophic misalignment and war less likely) (though maybe your model tries to account for this, I didn’t look at it)
A broader point: I think making importance comparisons (between interventions) on the level of abstraction of “improving cooperation”, “hard AI alignment” and “partial/naive alignment” doesn’t make much sense. I expect comparing specific plans/interventions to be much more useful.
Thanks for your response!
Still, I am not yet convinced that my reading doesn't make sense. Here are some comments:
“respondents were very uncertain”
This seems to be both a reason to diversify one's portfolio of interventions for reducing X-risks, and a reason to try to improve such estimates (of P(Nth scenario | X-risk)). But it doesn't seem a strong reason to discard the conclusions of the survey (it would be, if we had more reliable information elsewhere).
“there’s overlap between the scenarios”:
I am unsure, but the overlaps don't seem that big overall. In particular, the overlap between scenarios {1, 2, 3} and {4, 5} below doesn't seem huge. (I also wonder whether these overlaps illustrate that you could reduce X-risks using a broader range of interventions than just “AI alignment” and “AI governance”.) The scenarios being:
1. The “Superintelligence” scenario (Bostrom, 2014)
2. Part 2 of “What failure looks like” (Christiano, 2019)
3. Part 1 of “What failure looks like” (Christiano, 2019)
4. War (Dafoe, 2018)
5. Misuse (Karnofsky, 2016)
6. Other existential catastrophe scenarios.
“no 1-1 mapping between “fields” and risk scenarios”
Sure, this would benefit from having a more precise model.
“Priority comparison of interventions is better than high-level comparisons”
Right. But high-level comparisons are so much cheaper to do that it seems worth staying at that level for now.
The point I am especially curious about is the following:
- Is this survey pointing to the conclusion that the importances of working on “Technical AI alignment”, “AI governance”, “Cooperative AI” and “Misuse limitation” are all within one OOM of each other?
By importance here I mean importance as in 80k's ITN framework, not the overall priority, which would also include neglectedness, tractability, and looking at object-level interventions.
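If I plug in illustrative per-field importances (placeholders in the spirit of the sketch above, not survey results), the one-OOM question reduces to a single ratio check:

```python
# Hypothetical per-field importances (placeholders, not survey results).
importance = {
    "Technical AI alignment": 0.40,
    "AI governance": 0.20,
    "Cooperative AI": 0.15,
    "Misuse limitation": 0.15,
}
spread = max(importance.values()) / min(importance.values())
print(f"max/min importance ratio: {spread:.1f} -> within one OOM: {spread < 10}")
```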
I dislike the use of “strongly bet” here, given that a literal bet seems hard to arrive at. See <https://nunosempere.com/blog/2023/03/02/metaphorical-bets> for some background.
Thanks for this, I won’t use “bet” in this context in the future