‘expected harm can still be much lower’ — this may be correct, but I’m not convinced it’s orders of magnitude lower, and it also depends hugely on one’s ethical viewpoint. My argument here isn’t that this difference doesn’t matter under any ethical theory (it obviously does), but that it matters very little to the actions of my proposed combined AI Safety and Ethics knowledge network. I think this answers your second point as well: I am addressing this call to people who broadly think that, on the current path, the risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren’t that important, then this essay simply isn’t addressed to you.
I think this is the core point I’m making. The stochastic-parrots vs. superintelligence distinction isn’t necessarily irrelevant when one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, the distinction stops mattering very much.