I think OpenAI doesn’t actually advocate a “full-speed ahead” approach in any strong sense. A hypothetical version of OpenAI that truly advocated a full-speed-ahead approach would immediately gut its safety and preparedness teams, advocate for AI subsidies, and argue against any and all regulations that might impede its mission.
Now, of course, there might be political reasons why OpenAI doesn’t come out and do this. They care about their image, and I’m not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI leaders care about both acceleration and safety. In fact, caring about both seems quite rational even from a purely selfish perspective.
I claim that such a stance wouldn’t actually be much different from the allegedly “ordinary” view I described previously: that accelerating AI, rather than pausing or shutting it down, can be favored in many circumstances.
OpenAI might be less risk-averse than the general public, but in that case we’re talking about a difference in degree, not a qualitative difference in motives.
Quick notes, a few months later:
1. The alignment team has since been dissolved.
2. On advocacy: it might well make more sense for OpenAI to lobby effectively via Microsoft. Microsoft owns 49% of OpenAI (at least, of the for-profit arm, and subject to some profit cap, whatever that means exactly). If I were Microsoft, I’d prefer to use my well-experienced lobbyists for this sort of thing, rather than have OpenAI (which I would value mainly for its tech integration with Microsoft products) worry about it. I believe that Microsoft is lobbying heavily against AI regulation, though perhaps not directly for many subsidies.
I am sympathetic to the view that OpenAI leaders think of themselves as caring about many aspects of safety, and that they consider their stances reasonable. I’m just not sure how many others who are educated on this topic would agree with them.