I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control), and b) if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant?
Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds.
'I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control)'
The *expected* harm can still be much lower, even if the threat is not zero. I also think 'they might get integrated with nuclear command and control' naturally suggests much more targeted action than does "they are close to superintelligent systems, and any superintelligent system is mega dangerous no matter what it's designed for".
'if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant'
Well, it's not relevant if X-risk from superintelligence is in fact significant. But I was talking about the world where it isn't. In that world, we possibly shouldn't be pulling against the companies overall at all: merely showing that there are still some harms from their actions is not enough to show that we should be all-things-considered against them. Wind farms impose some externalities on wildlife, but that doesn't mean they are overall bad.
‘Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I’m not really sure why your argument holds’
I don't think so. Firstly, people are not always rational. I am suspicious that a lot of the ethics crowd sees AI/tech companies/enthusiasm about AI as something like a symbol of a particular kind of masculinity that they, as a particular kind of American liberal feminist, dislike. This, in my view, biases them in favor of the harms outweighing the benefits, and is also related to a particular style of US liberal identity politics where, once a harm has been identified and associated with white maleness, the harmful thing must be rhetorically nuked from orbit, and any attempt to think about trade-offs is pathetic excuse-making. Secondly, I think many of the AI safety crowd just really like AI and think it's cool: roughly, they see it as a symbol of the same kind of stuff their opponents do; it's just that they like that stuff, and it's tied up with their self-esteem. Thirdly, I think many of them hope strongly for something like paradise/immortality through 'good AI' just as much as they fear the bad stuff. Maybe that's all excessively cynical, and I don't hold the first view about the ethics people all that strongly, but I think a wider 'people are not always rational' point applies. In particular, people are often quite scope-insensitive. So the fact that Bob and Alice both think X is harmful, where Alice's view implies it is super-mega deadly harmful and Bob's view just that it is pretty harmful, doesn't necessarily mean Bob will denounce it less passionately than Alice.
'expected harm can still be much lower': this may be correct, but I'm not convinced it's orders of magnitude. And it also hugely depends on one's ethical viewpoint. My argument here isn't that under all ethical theories this difference doesn't matter (it obviously does), but that, for the actions of my proposed combined AI Safety and Ethics knowledge network, this distinction actually matters very little. This, I think, answers your second point as well; I am addressing this call to people who broadly think that, on the current path, risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren't that important, then this essay simply isn't addressed to you.
I think this is the core point I'm making. It is not that the stochastic parrots vs superintelligence distinction is necessarily irrelevant if one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, the distinction stops mattering very much.