[Question] Can we convince people to work on AI safety without convincing them about AGI happening this century?

Context for the question

I recently had a call with someone working in the AI/x-risk space who thinks we can convince more people to work on AI safety-related efforts without needing to convince them that artificial general intelligence (AGI) will be achieved within this century. He didn't elaborate on his reasons, and he is quite busy, so I'd rather poll forum readers to answer the question above.

Views on AGI among AI researchers in the EA community vs. those outside it

I ask this because even though many EAs in the AI risk space think that AGI will likely be achieved within this century (I imagine the median view among EAs in this space is roughly a 50% chance of AGI by 2050), this view is still contentious in the mainstream AI community (and in mainstream media generally). However, the person I spoke with said that more AI researchers are now paying attention to AI safety thanks to various efforts, so he thinks it is easier than before to get people to work on safety (i.e., to make AI systems more explainable and safe) without needing to convince them about AGI. I can also imagine that it could be easier to get AI researchers to do safety-related work without first trying to convince them that AGI will happen this century.

My experience interviewing an AI professor in the Philippines

I can sense that the view that AGI will arrive this century is contentious because I recently interviewed a leading AI professor/researcher in the Philippines, and he thinks we won't achieve AGI within this century (he thinks it is still far away). I'm from the Philippines, and I don't yet know any AI researchers here who share the view that AGI will be created within this century; I imagine it would be hard to find local AI researchers who already hold views on AGI similar to EAs'. However, the professor told me that he is interested in doing a research project on making AI models more explainable, and that he also wants to be able to train AI models without large amounts of compute. Making AI models more explainable seems to contribute to AI safety research (I'm not sure whether training AI models with less compute is safety-related; probably not?).

Crowdsourcing resources/thoughts on this question

I'd love to hear whether people think we can grow the quantity and quality of the AI safety community's efforts by focusing on arguments for why AI should be explainable and safe, rather than on convincing people that AGI will happen this century. If anyone can point me to resources or content that makes the case for working on AI safety without arguing that AGI will happen this century, that would be great. Thanks!
