Thanks for writing this! As others have commented, I thought the focus on your actual cruxes and uncertainties, rather than just trying to lay out a clean or convincing argument, was really great. I'd be excited to see more talks/write-ups of a similar style from other people working on AI safety or other causes.
> I think that long-term, it's not acceptable to have there be people who have the ability to kill everyone. It so happens that so far no one has been able to kill everyone. This seems good. I think long-term we're either going to have to fix the problem where some portion of humans want to kill everyone or fix the problem where humans are able to kill everyone.
This, and the section it's a part of, reminded me quite a bit of Nick Bostrom's Vulnerable World Hypothesis paper (and specifically his "easy nukes" thought experiment). From that paper's abstract:
> Scientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the "semi-anarchic default condition". [...] A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance.
I'd recommend that paper for people who found that section of this post interesting.