Hey, good question!
First of all, I’d recommend Holden Karnofsky’s “What AI companies can do today to help with the most important century”, which is the closest thing I know of to a reputable answer to your question (though it doesn’t answer the exact same question).
Also see his other posts, like “Spreading messages to help with the most important century”, about how to approach similar problems (how to communicate about AI risk and, especially, what failure modes to avoid).
My own (non-reputable) opinion is that the most important things are:
Getting people on board about this being an actual danger (if you think it is)
Getting them to notice that this is happening because many people are following their own incentives, such as shipping profitable products quickly.
I know these aren’t concrete, but I don’t think it’s realistic to meet a CEO and get them to change their plans if they’re not on board with that.
Still, to answer your question concretely, here are my thoughts in order, with the most important on top:
Don’t develop new AI capabilities (the kind that might bring us closer to “AGI”)
Don’t share capabilities you’ve created
Don’t do things that draw a lot more resources into the field, for example:
The hype around ChatGPT
Adding a new AI org, which increases race dynamics
[there’s more]
I hope that helps
Welcome to the EA Forum!