There’s a serious (>10%) risk that we’ll see transformative AI within a few years.
In that case it’s not realistic to have sufficient protective measures for the risks in time.
Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up, and alignment science breakthroughs that we can’t put a timeline on given the nascent state of the field. So even decades might or might not be enough time to prepare, even given a lot of effort.
If it were all up to me, the world would pause now.
Reading the first half of this post, I feel that your views are actually very close to my own. It leaves me wondering how much your conflicts of interest -
I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse.
- are factoring into why you come down in favour of RSPs (rather than pausing now) in the end.