Thanks for your response!
Still, I am not yet convinced that my reading doesn’t make sense. Here are some comments:
“respondents were very uncertain”
This seems to be both a reason to diversify one’s portfolio of interventions for reducing X-risks, and a reason to improve such estimates (of P(Nth scenario|X-risk)). But it doesn’t seem to be a strong reason to discard the conclusion of the survey (it would be, if we had more reliable information elsewhere).

“there’s overlap between the scenarios”
I am unsure, but the overlaps do not seem that big overall. In particular, the overlap between scenarios {1, 2, 3} and {4, 5} in the list below does not seem huge. (I also wonder whether these overlaps illustrate that you could reduce X-risks using a broader range of interventions than just “AI alignment” and “AI governance”; see the sketch after the list.)
1. The “Superintelligence” scenario (Bostrom, 2014)
2. Part 2 of “What failure looks like” (Christiano, 2019)
3. Part 1 of “What failure looks like” (Christiano, 2019)
4. War (Dafoe, 2018)
5. Misuse (Karnofsky, 2016)
6. Other existential catastrophe scenarios.
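
To illustrate the diversification point, here is a minimal sketch. Everything in it is an assumption made for illustration: the three hypothetical worlds, the probability numbers, and the square-root (diminishing-returns) impact model; none of it comes from the survey.

```python
import numpy as np

# Three equally plausible "worlds", each making a different scenario
# dominant; rows are worlds, columns are P(scenario) in that world.
# All numbers are made up for illustration.
worlds = np.array([
    [0.8, 0.1, 0.1],  # world A: scenario 1 dominates
    [0.1, 0.8, 0.1],  # world B: scenario 2 dominates
    [0.1, 0.1, 0.8],  # world C: scenario 3 dominates
])

def expected_risk_reduction(allocation):
    # Assumed impact model: risk reduction in each world is
    # sum_i P(scenario i) * sqrt(resources on scenario i),
    # i.e. diminishing returns within each scenario.
    return np.mean(worlds @ np.sqrt(allocation))

concentrated = np.array([1.0, 0.0, 0.0])  # bet everything on scenario 1
diversified  = np.array([1/3, 1/3, 1/3])  # spread across scenarios

print(expected_risk_reduction(concentrated))  # ~0.33
print(expected_risk_reduction(diversified))   # ~0.58
```

Diminishing returns and uncertainty jointly drive this: with linear returns a funder would simply concentrate on the highest-mean scenario, but with concave returns, betting everything on your single best guess is penalized whenever you might be wrong about which scenario dominates.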
“no 1-1 mapping between ‘fields’ and risk scenarios”
Sure, this would benefit from having a more precise model.

“Priority comparison of interventions is better than high-level comparisons”
Right. High-level comparisons are so much cheaper to do that it seems worth staying at that level for now.
The point I am especially curious about is the following:
- Is this survey pointing to the conclusion that the importances of working on “Technical AI alignment”, “AI governance”, “Cooperative AI”, and “Misuse limitation” all lie within one order of magnitude (OOM) of each other?
By importance here I mean importance as in the ITN framework of 80k, not the overall priority, which would also have to include neglectedness, tractability, and a look at object-level interventions. (A toy version of the “within one OOM” comparison is sketched below.)
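
For concreteness, here is a minimal sketch of the comparison I have in mind. The probabilities and the field-to-scenario mapping are made up for illustration; they are not the survey’s numbers.

```python
# Hypothetical mean P(scenario cluster | X-risk), attributed to the
# field that would address that cluster. Numbers and mapping are
# illustrative assumptions only.
mean_importance = {
    "Technical AI alignment": 0.30,  # e.g. scenarios 1-3
    "AI governance":          0.15,  # e.g. scenario 4
    "Cooperative AI":         0.10,
    "Misuse limitation":      0.08,  # e.g. scenario 5
}

ratio = max(mean_importance.values()) / min(mean_importance.values())
print(f"max/min importance ratio: {ratio:.2f}")  # 3.75 on these numbers
print("within one OOM" if ratio < 10 else "spans more than one OOM")
```

On numbers like these the answer would be yes; the question is whether the survey’s actual distributions support something similar.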
Interesting and nice to read!
Do you think the following is right?
The larger the Upside-focused Colonist Curse, the fewer resources agents who care about suffering will control overall, and hence the smaller the risk of conflicts causing S-risks?
This may balance out the opposite effect, namely that the larger the Upside-focused Colonist Curse, the more neglected S-risks are.
In short: a stronger Upside-focused Colonist Curse would produce fewer S-risks while at the same time making them more neglected.
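
One way to make the balance question precise (a toy formalization; the symbols $R$, $N$, and $V$ are my notation, not the post’s): write the level of conflict-driven S-risks as $R(c)$, decreasing in the strength $c$ of the curse, and the neglectedness of S-risk work as $N(c)$, increasing in $c$. A crude proxy for the marginal value of working on S-risks is then

$$V(c) = R(c)\,N(c), \qquad \frac{dV}{dc} = R'(c)\,N(c) + R(c)\,N'(c),$$

so the two effects exactly cancel when the relative changes offset, i.e. when $-R'(c)/R(c) = N'(c)/N(c)$. Whether that holds seems like an empirical question about how fast each effect kicks in.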