I think that while utilitarian (but not longtermist) views might well justify going full speed ahead, normal people are quite risk averse, and are not likely to react well to someone saying “let’s take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than going a bit slower so that some people die and miss out”. That’s just a guess, though. (Maybe Altman’s probability is actually way lower; mine would be, but I don’t think a probability more than an order of magnitude lower than that fits with the sort of stuff he’s said about X-risk in the past.)
I think OpenAI doesn’t actually advocate a “full-speed-ahead approach” in a strong sense. A hypothetical version of OpenAI that did would immediately gut its safety and preparedness teams, advocate subsidies for AI, and argue against any and all regulations that might impede its mission.
Now, of course, there might be political reasons why OpenAI doesn’t come out and do this. They care about their image, and I’m not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI leaders care about both acceleration and safety. In fact, caring about both safety and acceleration seems quite rational from a purely selfish perspective.
I claim that such a stance wouldn’t actually be much different from the allegedly “ordinary” view I described previously: that acceleration, rather than pausing or shutting down AI, can be favored in many circumstances.
OpenAI might be less risk averse than the general public, but in that case we’re talking about a difference in degree, not a qualitative difference in motives.