[Question] AI risks: the most convincing argument

I remember the time when I wasn’t yet fully convinced that misaligned AI poses a serious threat to humanity. I also remember the argument that finally convinced me it does. The argument is simple: superintelligent agents can be deceptive, and interpretability is difficult. As a result, you might assume you know what to expect from your superintelligent model when, well, you don’t.

So, I’m asking everyone: what is the argument that convinced you of the importance of the alignment problem? Gathering these answers will help me reflect on the best ways to communicate AI safety to people who don’t take alignment seriously or have no background in the area. Thanks in advance!