‘A Narrow Path’ was shared on Twitter today, written by the staff of ControlAI. I’m still making my way through it, but it is essentially an attempt at laying out a concrete, step-by-step plan to get humanity safely through the development of superintelligence.
Curious to hear what people on this forum think of the proposals.
The series’ introduction:
There is a simple truth—humanity’s extinction is possible. Recent history has also shown us another truth—we can create artificial intelligence (AI) that can rival humanity.
While most AI development is beneficial, artificial superintelligence threatens humanity with extinction. We currently have no method to control an entity with greater intelligence than our own. We have no ability to predict the intelligence of advanced AIs before developing them, and only extremely limited methods to accurately measure their competence after development.
We now stand at a time of peril. Companies across the globe are investing to create artificial superintelligence, systems they believe will surpass the collective capabilities of all humans. They publicly state that it is not a matter of “if” such superintelligence will exist, but “when”.
We do not know how to control AI vastly more powerful than us. Should attempts to build superintelligence succeed, this would risk our extinction as a species. But humanity can choose a different future: there is a narrow path through.