Agree that, in isolation, spreading the ideas that
(a) AI could be really powerful and important within our lifetimes
and
(b) Building AI too quickly/incautiously could be dangerous
Could backfire.
But I think just removing the “incautiously” element, focusing on the “too quickly” element, and adding
(c) So we should direct more resources to AI Safety research
Should be pretty effective in preventing people from thinking we should race to create AGI.
So essentially: AI could be really powerful, building it too quickly could be dangerous, so we should fund lots of AI Safety research before it’s invented. I think adding more fidelity/detail/nuance would be net negative, given that it would slow the spread of the message.
Also, I think we shouldn’t take the things OpenAI and DeepMind say at face value, and should bear in mind the corrupting influence of the profit motive, motivated reasoning, and ‘safetywashing’.
Just because someone says they’re making something that could make them billions of dollars because they think it will benefit humanity doesn’t mean they’re actually doing it to benefit humanity. What they claim is a race to make safe AGI is probably significantly motivated by a race to make lots of money.