No, CS majors didn’t delude themselves that the best way to save the world is to do CS research
A few months ago, the effective altruism-sympathetic journalist Dylan Matthews wrote this in Vox about enthusiasm for AI safety research:
“At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research.”
I have seen this claim made in various forms several times. While I understand why Dylan Matthews could form this view at a distance from the people involved, I think it is an inaccurate description.
Reason one
The early adopters of the view were disproportionately mathematicians, philosophers and interdisciplinary researchers. I personally first became worried about artificial intelligence when the community was much smaller, while I was studying genetics and economics. Many computer scientists have now come around, but if anything they were relatively resistant compared to people in related fields.
Reason two
More importantly, the belief that artificial intelligence presents a big risk is more likely to lead you to despair than to a rosy outlook in which you get to be a hero. In the early days, most people who were worried about AI felt completely disempowered, because i) there was no clear path to a solution, and indeed one may not exist at all, and ii) even if a solution exists, most of those who were concerned, including me, thought they were not personally qualified to do any of the relevant work to solve the problem.
The thought that superintelligent machines may destroy everything you care about, and there may well be nothing you can do about it, is hardly the most appealing belief. Rather than walking into a phone booth and putting on a superhero outfit, many people who read these arguments became anxious and despondent. That remains the case today.
But it is a belief that nevertheless spread, in my view because the underlying arguments, best laid out in the book Superintelligence, are remarkably hard to convincingly rebut.
Reason three
Almost everyone I know who is now especially worried about artificial intelligence initially thought they could put their skills to good use in other cause areas, such as reducing poverty or animal suffering in factory farms. They didn’t need to change cause areas to feel like they could greatly improve the world.
Even if the facts were true, the underlying argument can’t work
Finally, even if the claim were true, I don’t think it could form a coherent argument against worrying about AI risk.
The reason is this.
Imagine the reverse were the case and it wasn’t the domain experts most qualified to work on the problem who thought it was a big deal. Instead it was chefs, musicians and magazine copy editors who were most concerned. Would that be an argument in favour of worrying more, because such non-field experts could clearly have no self-serving bias that would cause them to worry?
I don’t think so. The concern of non-field experts is clearly less persuasive and I expect almost everyone would agree. Indeed, many people have claimed in the past that the fact that many computer scientists didn’t seem so concerned was a good reason not to be worried.
But if you want to say it’s both unconvincing when field experts like computer scientists are concerned, and also unconvincing when non-field experts like farmers are concerned, congratulations: you’ve just made your view unresponsive to anyone else’s judgement. That’s a very bad place to be.
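To see why this position is incoherent, it can help to make it concrete. Below is a minimal Bayesian sketch in Python, with purely made-up numbers (the prior and likelihoods are illustrative, not drawn from any survey): by the conservation of expected evidence, your prior must equal the probability-weighted average of the two possible posteriors, so expert concern and expert unconcern cannot both leave you less worried.

```python
# A toy Bayesian update, with purely illustrative numbers, showing why
# treating both expert concern and expert unconcern as unconvincing is
# incoherent: the prior must equal the probability-weighted average of
# the two possible posteriors (conservation of expected evidence).

prior = 0.10  # hypothetical prior credence that AI poses a serious risk

# Hypothetical likelihoods of observing visible expert concern:
p_concern_if_risk = 0.80     # if the risk is real
p_concern_if_no_risk = 0.30  # if it is not

p_concern = p_concern_if_risk * prior + p_concern_if_no_risk * (1 - prior)

# Bayes' rule for each of the two possible observations:
posterior_concern = p_concern_if_risk * prior / p_concern
posterior_no_concern = (1 - p_concern_if_risk) * prior / (1 - p_concern)

print(f"If experts are concerned:     {posterior_concern:.3f}")    # ~0.229, above the prior
print(f"If experts are not concerned: {posterior_no_concern:.3f}") # ~0.031, below the prior

# The weighted average of the posteriors recovers the prior exactly,
# so both observations cannot push your credence in the same direction:
avg = posterior_concern * p_concern + posterior_no_concern * (1 - p_concern)
assert abs(avg - prior) < 1e-12
```

Whatever numbers you substitute, the final check holds: the only way for expert concern and expert unconcern to both count against worrying is to adopt likelihoods that don’t describe any consistent view of the evidence.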
In fact, concern about artificial intelligence among computer scientists, and among other relevant domain experts such as philosophers, mathematicians, brain scientists and machine learning researchers, is on balance the strongest piece of evidence available, because they are the people most likely to know what they are talking about.
Finally, how could the situation ever be otherwise? It’s natural that the first people to notice a new potential problem from a technology are domain experts, or at least people in adjacent domains. I bet the first people who sounded the alarm about potential threats from nuclear weapons were people who were close to relevant physics research. No one else could have been aware of the issue, or able to evaluate the merits of the arguments. I don’t think the fact that physicists were initially the main group worried about the power of nuclear weapons would be a good reason to doubt them.
What biases might really create problems?
The fact that the argument above doesn’t work isn’t evidence that AI really is a problem. The opposite of a wrong idea isn’t a right one. Maybe there are other cognitive biases causing people to exaggerate the problem.
For example, throughout history it has been common for people to believe they are living at a particularly crucial moment, when either a huge disaster or a revolutionary gain could occur. Sometimes they have been right (e.g. at the point when nuclear weapons were invented), but usually they have been wrong. People concerned with catastrophic risks call this ‘millennialist cognitive bias’, discussed here.
But as is usually the case, when weighing artificial intelligence against other causes there are factors that could bias you in favour of one view, and others in favour of the other. I don’t find throwing potential biases at people you disagree with very helpful, unless you can get evidence about their relative magnitudes.