I think OpenAI doesn’t actually advocate a “full-speed-ahead approach” in a strong sense. A hypothetical version of OpenAI that truly advocated a full-speed-ahead approach would immediately gut its safety and preparedness teams, advocate subsidies for AI, and argue against any and all regulations that might impede its mission.
Now, of course, there might be political reasons why OpenAI doesn’t come out and do this. They care about their image, and I’m not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI leaders care about both acceleration and safety. In fact, caring about both safety and acceleration seems quite rational from a purely selfish perspective.
I claim that such a stance wouldn’t actually be much different from the allegedly “ordinary” view that I described previously: that acceleration, rather than pausing or shutting down AI, can be favored in many circumstances.
OpenAI might be less risk averse than the general public, but in that case we’re talking about a difference in degree, not a qualitative difference in motives.