Since the expected harms from AI are obviously much smaller in an extreme "stochastic parrot" world where we don't have to worry at all about X-risk from superintelligent systems, it actually does very much matter whether you're in that world if you're proposing a general attempt to block AI progress: if the expected harms from further commercial development of AI are much smaller, they are much more likely to be outweighed by the expected benefits.
I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control) and b) if we ought to still be pulling in the same direction against these companies, why is the magnitude difference relevant?
Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds.
"I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control)"
The *expected* harm can still be much lower, even if the threat is not zero. I also think "they might get integrated with nuclear command and control" naturally suggests much more targeted action than does "they are close to superintelligent systems and any superintelligent system is mega dangerous no matter what it's designed for".
"if we ought to still be pulling in the same direction against these companies, why is the magnitude difference relevant?"
Well, it's not relevant if X-risk from superintelligence is in fact significant. But I was talking about the world where it isn't. In that world, we possibly shouldn't be pulling against the companies overall at all: merely showing that there are still some harms from their actions is not enough to show that we should be all-things-considered against them. Wind farms impose some externalities on wildlife, but that doesn't mean they are overall bad.
"Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds."
I don't think so. Firstly, people are not always rational. I suspect that a lot of the ethics crowd sees AI/tech companies/enthusiasm about AI as a sort of symbol of a particular kind of masculinity that they, as a particular kind of American liberal feminist, dislike. This, in my view, biases them in favor of the harms outweighing the benefits, and is also related to a particular style of US liberal identity politics in which, once a harm has been identified and associated with white maleness, the harmful thing must be rhetorically nuked from orbit, and any attempt to think about trade-offs is pathetic excuse-making. Secondly, I think many of the AI safety crowd just really like AI and think it's cool: roughly, they see it as a symbol of the same kind of stuff their opponents do; it's just that they like that stuff, and it's tied up with their self-esteem. Thirdly, I think many of them hope strongly for something like paradise/immortality through "good AI" just as much as they fear the bad stuff. Maybe that's all excessively cynical, and I don't hold the first view about the ethics people all that strongly, but I think a wider "people are not always rational" point applies. In particular, people are often quite scope-insensitive. So the fact that Bob and Alice both think X is harmful, with Alice's view implying it is super-mega deadly harmful and Bob's only that it is pretty harmful, doesn't necessarily mean Bob will denounce it less passionately than Alice.
"The expected harm can still be much lower": this may be correct, but I'm not convinced it's orders of magnitude lower. It also hugely depends on one's ethical viewpoint. My argument here isn't that this difference doesn't matter under all ethical theories (it obviously does), but that it matters very little for the actions of my proposed combined AI Safety and Ethics knowledge network. This, I think, answers your second point as well: I am addressing this call to people who broadly think that, on the current path, risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren't that important, then this essay simply isn't addressed to you.
I think this is the core point I'm making. It is not that the stochastic parrots vs. superintelligence distinction is necessarily irrelevant if one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, the distinction stops mattering very much.