I think that while utilitarian but not longtermist views might well justify full speed ahead, normal people are quite risk-averse, and are not likely to react well to someone saying "let's take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than going a bit slower so that some people die and miss out". That's just a guess, though. (Maybe Altman's probability is actually way lower; mine would be. But I don't think a probability more than an order of magnitude lower than that fits with the sort of things he has said about x-risk in the past.)
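For what it's worth, here is a toy expected-utility sketch of that intuition (every number except the 7% is invented purely for illustration): a risk-neutral calculation can favor the fast path, but even a mildly risk-averse (concave) utility function flips the ranking.

```python
# Toy model (illustrative numbers only, not anyone's actual estimates):
# why a risk-averse agent can reject a gamble a risk-neutral one accepts.
import math

P_DOOM = 0.07      # assumed extinction probability on the fast path
FAST_WIN = 100.0   # value if the fast path succeeds (benefits arrive sooner)
SLOW_SURE = 90.0   # value of the slower path (safer, but some people miss out)

def evaluate(utility):
    """Expected utility of each path under a given utility function."""
    fast = (1 - P_DOOM) * utility(FAST_WIN) + P_DOOM * utility(0.0)
    slow = utility(SLOW_SURE)
    return fast, slow

# Risk-neutral (linear utility): the fast path looks better.
fast, slow = evaluate(lambda v: v)
print(f"risk-neutral: fast={fast:.2f}  slow={slow:.2f}")  # 93.00 vs 90.00

# Risk-averse (concave utility): the ranking flips.
fast, slow = evaluate(math.sqrt)
print(f"risk-averse:  fast={fast:.2f}  slow={slow:.2f}")  # 9.30 vs 9.49
```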
I think OpenAI doesn't actually advocate a "full speed ahead" approach in a strong sense. A hypothetical version of OpenAI that did would immediately gut its safety and preparedness teams, advocate subsidies for AI, and argue against any and all regulations that might impede its mission.
Now, of course, there might be political reasons why OpenAI doesn't come out and do this. They care about their image, and I'm not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI's leaders care about both acceleration and safety. In fact, caring about both seems quite rational from a purely selfish perspective.
I claim that such a stance wouldn't actually be much different from the allegedly "ordinary" view I described previously: that acceleration, rather than pausing or shutting down AI, can be favored in many circumstances.
OpenAI might be less risk-averse than the general public, but in that case we're talking about a difference in degree, not a qualitative difference in motives.
Quick notes, a few months later:
1. Since then, the alignment team has been dissolved.
2. On advocacy: it might well make more sense for them to lobby via Microsoft. Microsoft owns 49% of OpenAI (at least of the for-profit arm, and only up to some profit cap, whatever that means exactly). If I were Microsoft, I'd prefer to use my well-experienced lobbyists for this sort of thing rather than have OpenAI (which I'd value mainly for its tech integration with Microsoft products) worry about it. I believe Microsoft is lobbying heavily against AI regulation, though maybe not directly for many subsidies.
I am sympathetic to the view that OpenAI's leaders think of themselves as caring about many aspects of safety, and that they think their stances are reasonable. I'm just not sure how many others who are educated on this topic would agree with them.