Thanks for your response, Alexa! I’d recommend reading anything by Eliezer Yudkowsky (the founder of the Machine Intelligence Research Institute and one of the world’s best-known AI safety advocates), especially his open letter (linked here). This journal article by Joe Carlsmith (who is an EA and, I believe, a Forum participant as well) makes a more technical case for AI x-risk, from a less pessimistic perspective than Yudkowsky’s.