Maybe. I've been thinking about this a lot lately in the context of Phil Torres's argument about messianic tendencies in longtermism, and I think he's basically right that it can push people toward ideas that don't have any guardrails.
A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on Earth.
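To make the comparison concrete, here's a toy expected-value calculation. The utility figures are my own illustrative assumptions (nothing in the original fixes them), but under total utilitarianism any plausibly astronomical value for the lightcone future makes the gamble win:

```python
# Toy expected-value comparison for a total utilitarian longtermist.
# All utility numbers below are illustrative assumptions, not from the source.

earth_future = 5e9 * 1e10      # 5 billion years x ~10^10 people, in life-years
lightcone_future = 1e45        # assumed astronomically larger value of a transhuman lightcone

gamble = 0.01 * lightcone_future + 0.99 * 0   # 1% glory, 99% extinction
certainty = 1.00 * earth_future               # guaranteed 5 billion years on Earth

# Under these stakes, the gamble dominates the sure thing.
assert gamble > certainty
```

The point is that the conclusion is insensitive to the exact numbers: as long as the lightcone future is worth vastly more than a merely planetary one, the 1 percent gamble wins the expected-value contest.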
That, after all, is what shutting up and multiplying tells you. So the idea that longtermism makes luddite solutions to X-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short termist might feel about them, seems right to me.
Of course, there is also the other direction: if there were a 1-in-a-trillion chance that activating this AI would kill us all, and a 999,999,999,999-in-a-trillion chance it would be awesome, but waiting a hundred years got you an AI with only a 1-in-a-quadrillion chance of killing us all, the short termist pulls the switch while the longtermist waits.
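A quick sketch of why the two agents diverge. The delay cost is an assumed stand-in for whatever the short termist thinks a hundred lost years is worth; the probabilities are the (implausibly precise) ones from the scenario:

```python
# Act-now vs. wait decision under the scenario's odds.
# V and delay_cost are illustrative assumptions, not from the source.
V = 1.0                        # value of an awesome AI future, normalized
p_now, p_wait = 1e-12, 1e-15   # extinction probabilities from the scenario
delay_cost = 0.05 * V          # assumed short-termist cost of a 100-year delay

act_now = (1 - p_now) * V
wait = (1 - p_wait) * V - delay_cost   # short termist docks the delay

# The short termist acts now: the delay cost swamps the tiny risk difference.
assert act_now > wait
# The longtermist treats delay_cost as ~0 against cosmic stakes, so waits.
assert (1 - p_wait) * V > act_now
```

The divergence comes entirely from how much the century of delay is discounted, not from any disagreement about the risk numbers themselves.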
Also, of course, there's model error: any estimate that actually puts a number like '1 in a trillion' on anything even slightly interesting happening in the real world is a nonsense calculation.