I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.
Longtermists don’t just want to reduce x-risk; they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn’t go through. A thoughtful shorttermist who is concerned about x-risk probably won’t care about existential security; they probably just want to reduce x-risk to the lowest level possible within their lifetime.
Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful shorttermist, who may prefer reducing specific risks directly.
Maybe. I’ve been thinking about this a lot lately in the context of Phil Torres’s argument about messianic tendencies in longtermism, and I think he’s basically right that it can push people towards ideas that don’t have any guardrails.
A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on Earth.
That, after all, is what shutting up and multiplying tells you. So the idea that longtermism makes luddite solutions to x-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a shorttermist might feel about them, seems right to me.
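The "shut up and multiply" comparison above can be sketched as a one-line expected-value calculation. All of the values below are illustrative assumptions (the text only specifies the probabilities, not how much each future is worth):

```python
# Illustrative expected-value comparison for a total utilitarian.
# Probabilities come from the text; the values are assumed for illustration.

# Option A: gamble on a lightcone-spanning transhuman future.
p_glorious = 0.01            # 1 percent chance of the glorious future
value_transhuman = 1e30      # assumed value of that future (hypothetical)

# Option B: guaranteed survival on Earth for 5 billion years.
value_earth = 1e15           # assumed value of that future (hypothetical)

ev_a = p_glorious * value_transhuman   # extinction contributes ~0 value
ev_b = 1.0 * value_earth

# Option A wins whenever the transhuman future is worth more than
# 100x the guaranteed Earth future, regardless of the 99% extinction risk.
print(ev_a > ev_b)
```

Under these made-up numbers the gamble dominates, which is the point: with astronomical enough stakes, total utilitarian arithmetic tolerates near-certain extinction.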
Of course, there is also the other direction: if there were a 1/1 trillion chance that activating this AI would kill us all, and a 999,999,999,999/1 trillion chance it would be awesome, but waiting a hundred years would get you an AI with only a 1/1 quadrillion chance of killing us all, a shorttermist pulls the switch, while a longtermist waits.
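The same arithmetic, run from the longtermist's side, shows why waiting wins in this scenario. The probabilities are from the text; the stakes are illustrative assumptions:

```python
# Illustrative sketch of the AI-activation trade-off.
# Probabilities from the text; both value figures are hypothetical.

p_doom_now = 1e-12        # 1/1 trillion chance activation kills everyone today
p_doom_later = 1e-15      # 1/1 quadrillion chance after waiting 100 years

future_value = 1e40       # assumed value of the entire long-run future
century_value = 1e10      # assumed value forgone by waiting 100 years

# Expected cost, to the whole future, of activating now rather than later.
risk_cost_of_acting_now = (p_doom_now - p_doom_later) * future_value

# For the longtermist, this expected cost swamps a century of forgone
# benefit, so they wait; the shorttermist, who only weighs the century,
# sees a negligible risk and pulls the switch.
print(risk_cost_of_acting_now > century_value)
```

The shorttermist's decision flips simply because they drop `future_value` from the calculation, not because they disagree about the probabilities.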
Also, of course, there is model error: any estimate that attaches numbers like ‘1/1 trillion’ to an even slightly interesting real-world event is a nonsense and bad calculation.