Artificial intelligence alignment is believed by many to be one of the most important challenges we face right now. I understand the argument that once AGI is developed, it's game over unless we have solved alignment, and I am completely convinced by it. However, I have never seen anyone explain the reasoning that leads experts in the field to believe AGI could arrive in the near future. Claims that there is an X% chance of AGI in the next Y years (where X is fairly large and Y fairly small) are rarely supported by an actual argument.
I realize that for the EA community to dedicate so many resources to this topic, there must be good reason to believe either that AGI really is not too far away, or that alignment is such a hard problem that it will take a long time to solve. The former seems to be the more widely held view.
Could someone either present, or point me toward, a clear explanation of why many believe AGI is on the horizon? Also, please correct me if this question reflects some misunderstanding on my part.