[Question] Why are most people in EA confident that AI will surpass humans?

(I’m a computer science outsider, so there may be a big knowledge gap between us.) Most experts think human-level AGI is just a matter of time and will probably arrive within 20 years. To summarize the reasons I’ve heard for AI surpassing humans: “AI has historically improved far beyond our expectations, and we are putting more and more effort into developing it, so we’re optimistic it’s going to surpass humans.”

I think my main problem is that I (and mainstream society) don’t think “intelligence” is a simple thing. So if current AI systems are still far from human-level intelligence in some respects (such as doing scientific research: discovering a meaningful question on their own and designing an experiment to test it), then despite AI’s impressive development history, I don’t see how you can be confident (>50%) that it will surpass humans sooner or later. (Unless there’s a theory saying that faster computing is the only thing AI now lacks to surpass humans. Someone mentioned that current algorithms are already sufficient for superintelligent AI; I wonder if there are articles discussing this.)

Or do you think AI’s skills, such as those of GPT systems, are already near human level? (They are in some respects, but in areas such as scientific research they still seem weak.)

As for AI automating its own development in the future: yes, it’s possible, but that only means AI development would be much faster; it doesn’t show that high-level intelligence is probable (>50%) given enough effort.

For myself, I’m an AI outsider, so I can’t predict AI timelines. I just don’t know why experts are confident that the probability of AI surpassing humans is >50% (this is different from non-experts; most ordinary people are not confident about this). I’m just sharing my doubts and hoping to hear more convincing arguments.