There’s an IMO fairly simple and plausible explanation for why Sam Altman would want to accelerate AI that doesn’t require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.
[ETA: also, presumably, Sam Altman thinks that some level of safety work is good. He just prefers a lower level of safety work/deceleration than a typical EA might recommend.]
It wouldn’t be unusual for him to have such a moral view. If one’s moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.
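To make the tradeoff concrete, here is a minimal sketch in the spirit of those models, not Bostrom’s or Jones’s actual formulation; every number in it is a made-up assumption, chosen only to show how a view that counts only currently existing people can favor acceleration:

```python
# Toy person-affecting expected-value comparison.  This is NOT Bostrom's or
# Jones's actual model; every number below is an illustrative assumption.

current_population = 8e9   # people alive today
annual_mortality = 0.01    # assume ~1% of currently existing people die each year
delay_years = 10           # assumed delay imposed by a much more cautious path
p_doom_fast = 0.05         # assumed extinction risk if we accelerate
p_doom_slow = 0.02         # assumed extinction risk if we go slow

def expected_beneficiaries(p_doom: float, delay: float) -> float:
    """Expected number of *currently existing* people who survive to benefit."""
    still_alive = current_population * (1 - annual_mortality) ** delay
    return still_alive * (1 - p_doom)

fast = expected_beneficiaries(p_doom_fast, 0)
slow = expected_beneficiaries(p_doom_slow, delay_years)
print(f"accelerate: {fast:.2e}   go slow: {slow:.2e}")
# With these particular numbers acceleration wins: the ~10% of current people
# who die during the delay outweighs the 3-point drop in extinction risk.
```

Different assumptions (a longer delay, a larger gap in extinction risk, or any weight at all on future generations) flip the answer; the point is only that no exotic moral view is required for acceleration to come out ahead.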
Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.
Framing it as ‘extreme levels of caution’ suggests people’s expected x-risk levels are really small, which isn’t what people actually believe.
I think “if you believe a technology will make humanity go extinct with a probability of 1% or more, be very very cautious” would be endorsed by a large majority of the general population & intellectual ‘elite’. It’s not at all a fringe moral position.
I’m not sure we disagree. A lot seems to depend on what is meant by “very very cautious”. If it means shutting down AI as a field, I’m pretty skeptical. If it means regulating AI, then I agree, but I think Sam Altman advocates regulation too.
I agree the general population would probably endorse the statement “if a technology will make humanity go extinct with a probability of 1% or more, be very very cautious” if it were put to them in a survey, but I think this statement is vague and somewhat misleading as a frame for how people would think about AI if they were given more facts about the situation.
Firstly, we’re not merely talking about any technology here; we’re talking about a technology that has the potential both to disempower humans and to make their lives dramatically better. Almost every technology has risks as well as benefits. Probably the most common method people use when deciding whether to adopt a technology themselves is to check whether the risks outweigh the benefits. Looking at the risks alone gives a misleading picture.
The relevant quantity is the risk-to-benefit ratio, and here it’s really not obvious that most people would endorse shutting down AI if they were aware of all the facts. Yes, the risks are high, but so are the benefits.
If elites were made aware of both the risks and the benefits of AI development, most of them seem likely to want to proceed cautiously, rather than not proceed at all or pause AI for many years, as many EAs have suggested. To test this claim empirically, we can look at what governments are already doing with regard to AI risk policy after having been advised by experts; and as far as I can tell, all of the relevant governments are substantially interested in both innovation and safety regulation.
Secondly, there’s a persistent and often large gap between what people say (e.g. when answering surveys) and what they actually want as measured by their behavior. For example, plenty of polling has indicated that a large fraction of people are very cautious regarding GMOs, but in practice most people happily eat GM foods without much concern. People are often largely thoughtless when answering abstract questions posed to them, especially about topics they have little knowledge of. And this makes sense, because their responses typically have almost no effect on anything that immediately or directly affects them. Bryan Caplan has discussed these issues in the context of surveys and voting before.
I think that whilst utilitarian but not longtermist views might well justify going full speed ahead, normal people are quite risk averse and are not likely to react well to someone saying “let’s take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than being a bit slower so that some people die and miss out”. That’s just a guess though. (Maybe Altman’s probability is actually way lower; mine would be. But I don’t think a probability more than an order of magnitude lower than that fits with the sort of things he’s said about x-risk in the past.)
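To make the risk-aversion point concrete, here is a toy comparison; the payoffs and probabilities are made up, and only the qualitative point matters:

```python
import math

# Toy illustration of how even mild risk aversion can flip the verdict.
# The payoffs and probabilities below are made up for illustration only.
p_doom = 0.07
accelerate = [(p_doom, 0.0), (1 - p_doom, 100.0)]  # extinction vs. big payoff soon
go_slow = [(1.0, 90.0)]                            # slightly smaller payoff, for sure

def expected(lottery, utility=lambda x: x):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in lottery)

# Risk-neutral: acceleration looks better (93 vs. 90).
print(expected(accelerate), expected(go_slow))

# Mildly risk-averse (concave) utility flips the ranking (~9.30 vs. ~9.49).
print(expected(accelerate, math.sqrt), expected(go_slow, math.sqrt))
```

Under risk neutrality the gamble looks fine; with even a modestly concave utility function it doesn’t, which is roughly how I’d expect most people to react.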
I think OpenAI doesn’t actually advocate a “full speed ahead” approach in a strong sense. A hypothetical version of OpenAI that did advocate such an approach would immediately gut its safety and preparedness teams, advocate subsidies for AI, and argue against any and all regulations that might impede its mission.
Now, of course, there might be political reasons why OpenAI doesn’t come out and do this. They care about their image, and I’m not claiming we should take all their statements at face value. But another plausible theory is simply that OpenAI leaders care about both acceleration and safety. In fact, caring about both safety and acceleration seems quite rational from a purely selfish perspective.
I claim that such a stance wouldn’t actually be much different from the allegedly “ordinary” view that I described previously: that acceleration, rather than pausing or shutting down AI, can be favored in many circumstances.
OpenAI might be less risk averse than the general public, but in that case we’re talking about a difference in degree, not a qualitative difference in motives.
Quick notes, a few months later:
1. Since then, the alignment team has been dissolved.
2. On advocacy, I think it might well make more sense for them to lobby, in effect, via Microsoft. Microsoft owns 49% of OpenAI (at least of the business part, and up to some profit cap, whatever that means exactly). If I were Microsoft, I’d prefer to use my well-experienced lobbyists for this sort of thing rather than have OpenAI (which I’d value mainly for its tech integration with Microsoft products) worry about it. I believe Microsoft is lobbying heavily against AI regulation, though maybe not directly for subsidies.
I am sympathetic to the view that OpenAI leaders think of themselves as caring about many aspects of safety, and also that they think their stances are reasonable. I’m just not very sure how many others who are educated on this topic would agree with them.
I agree that’s possible, but I’m not sure I’ve seen his rhetoric put that view forward in a clear way.
You don’t need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not want extreme levels of change. I think focusing too much on differing moral views is the wrong lens here.
The most obvious explanation for how Altman and people more concerned about AI safety (not specifically EAs) differ seems to be their estimates of how likely AI risk is relative to other risks.
That being said, the point that it’s disingenuous to ascribe cognitive bias to Altman merely for holding the opinion he holds is a fair one, and general discourse norms suggest one shouldn’t go too far with such attributions. Still, given Altman’s exceptional capacity for unilateral action due to his position, it’s reasonable to be at least somewhat concerned.