It might still be better than the counterfactual if an AI arms race was likely to happen soon anyway. I’d prefer that the AI leader have some safety red tape (even if it’s largely ignored by leaders and staff) rather than be a purely for-profit entity.
Nonetheless, there’s a terrible irony in the organization with the mission “ensure that artificial general intelligence benefits all of humanity” not only kicking off the corporate arms race, but seemingly rushing to win it.
It’s clear that the non-profit wrapper was inadequate to constrain the company. In hindsight, perhaps the right move would have been to invest more in AI governance early on, and perhaps to seek to make OpenAI a government body. Though I expect taking AI risk to DC in 2015 would have been a tough sell.