List of Lethalities is for people already familiar with the space. I don’t think it’s a very good introduction. #2 on the list literally dives straight into a detailed description of how AI kills everyone with rocket nanobots (it’s there to prove a specific point about the disparity between human intelligence and optimal AGI intelligence and shouldn’t be taken out of context, but it still is what it is).
List of Lethalities fit neatly into the state of AI safety at the time (June 2022); using it to introduce AI risk concepts to people is too far outside what it was optimized for.
Eliezer Yudkowsky’s 2008 paper Artificial Intelligence as a Positive and Negative Factor in Global Risk is an Oxford publication and has more than 600 citations. Even if it’s a bit long and several of its sections are severely out of date, it is still a really, really good introduction, and it’s very impressive that someone was able to write something that predictive of the future in 2008.
Mostly agree on List of Lethalities, but I do think it’s an excellent intro for particular types of people. I included it after hearing Shoshannah Tekofsky say, on this podcast with Akash Wasil, that it was her first serious encounter with AI risk arguments.