Hmm, well, aren’t we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical?
We’re all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also “What counts as death?”.) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call “me in the past/future” and with the spatially-distant brain cognitions that we typically call “other people”. The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it’s fundamentally a quantitative difference, not a qualitative one.
That said, something like a Pascal’s mugging does seem a bit ridiculous to me (but I’m open to the possibility I should hand over the money!).
By “fanatical” I want to talk about the thing that seems weird about Pascal’s mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility.
If you agree there’s something weird there and that longtermists don’t generally reason using that weird thing and typically do some other thing instead, that’s sufficient for my claim (b).
Certainly agree there is something weird there!
Anyway, I don’t think there was much disagreement between us, but it was an interesting exchange nonetheless!