If I were to try to convince someone to work on AI safety without convincing them that AGI will happen this century, I’d say things like:
While it may not happen this century, it might.
While it may not happen this century, it’ll probably happen eventually.
It’s extremely important; it’s an x-risk.
We are currently woefully underprepared for it.
It’s going to take a lot of research and policy work to plan for it, work which won’t be done by default.
Currently very few people are doing this work (e.g. there are more academic papers published on dung beetles than on human extinction, and AI risk is even more niche than that (I may be remembering the example wrong)).
There are other big problems, like climate change and nuclear war, but those are both less likely to cause x-risk and much less neglected.
That said, I think I have a good shot at convincing people that there's a significant chance of AGI this century.