The “moral intuition” is clearly not generated by a reliable process, because the thought experiments that elicit it abuse:
a. Incomprehensibly large or small numbers
b. Known cognitive biases
c. Wildly unintuitive premises
I think this gets at something important, but:
This list also applies to prominent arguments for longtermism and existential risk mitigation, right? For example, Greaves & MacAskill think that the charge of fanaticism is one of the most serious problems with strong longtermism, which “tend[s] to involve tiny probabilities of enormous benefits.” To the extent that’s true, the extreme thought experiments seem to capture something significant. If they reveal a failure of utilitarianism, strong longtermism may fail too. (I realize there are other, non-longtermist arguments for x-risk reduction.)
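To make the fanaticism worry concrete, here is a toy expected-value comparison. The numbers are purely hypothetical (they are not taken from Greaves & MacAskill); the sketch only illustrates the structure of the objection: a minuscule probability multiplied by a large enough payoff dominates any sure thing.

```python
# Toy expected-value comparison illustrating the fanaticism worry.
# All numbers are hypothetical and chosen only for illustration.

def expected_lives_saved(probability: float, lives_if_success: float) -> float:
    """Expected number of lives saved by an intervention."""
    return probability * lives_if_success

# A "safe" intervention: certainly saves one million lives.
safe = expected_lives_saved(probability=1.0, lives_if_success=1e6)

# A "long-shot" intervention: a one-in-a-trillion chance of securing
# a vast future containing 10^30 lives.
longshot = expected_lives_saved(probability=1e-12, lives_if_success=1e30)

print(f"Safe intervention:      {safe:.3e} expected lives")
print(f"Long-shot intervention: {longshot:.3e} expected lives")
# Naive expected-value maximization prefers the long shot by twelve orders
# of magnitude, even though it almost certainly accomplishes nothing.
# This is the structure the fanaticism objection targets.
```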
Longtermism does mess with intuitions, but it doesn’t rest its legitimacy on an appeal to intuition. In some ways, it’s the exact opposite: it seems absurd to think that every single life we see today could be nearly insignificant when compared to the vast future, and yet this is what one line of reasoning tells us.