Thanks for writing this! It really resonated with me, despite the fact that I only have a software engineering background and not much ML experience. I'm still struggling to form my own views for a lot of the reasons you mentioned, and one of my biggest sources of uncertainty has been figuring out what people with AI/ML expertise actually think about AI safety. This post has been very helpful in that regard (in addition to other information I've been taking in to resolve this uncertainty).

AGI timelines have become a major crux for me when considering how seriously to take AI risk: the closer AGI seems, the more concern is warranted, since even a low probability of AGI going rogue would carry a large negative expected value. It seems reasonable to me to put a 20-40% probability on AGI within the next 20-30 years, and by default I'd put at least a 10% probability on AGI going rogue absent any alignment effort. With these kinds of probabilities, AI risk still seems worth taking seriously, even though I remain very unsure of how things will play out. I expect to update substantially by the end of this decade, though, based on the kinds of algorithmic breakthroughs made in the next few years.
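To make the arithmetic behind that concrete, here's a back-of-the-envelope sketch combining my numbers above (all figures are just my rough guesses, not measurements):

```python
# Combining my stated probabilities: even the low end of my estimates
# implies a non-trivial chance of catastrophe this period.
p_agi_30yr = 0.20         # lower end of my 20-40% estimate for AGI within 20-30 years
p_rogue_given_agi = 0.10  # my default chance of AGI going rogue absent alignment work

p_catastrophe = p_agi_30yr * p_rogue_given_agi
print(f"P(rogue AGI this period) >= {p_catastrophe:.0%}")  # prints 2%
```

A 2% floor on a catastrophic outcome is, to my mind, already enough to justify serious effort, which is why the timeline estimate does most of the work in my overall view.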