I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.
I think I’d plausibly say the same thing for my other examples; I’d have to think a bit more about the actual probabilities involved.
That’s fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don’t have to lie to people about having voted!
When it comes to something like donating to GiveWell charities on a regular basis or going vegan for life, I think one can probably have greater than 50% belief that they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have a far lower probability, but it seems fair to consider doing these things over a longer period of time, as that is typically what people do (and what someone who chooses a longtermist career essentially does).
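To make the aggregation point concrete, here is a rough sketch with entirely invented numbers (the 2% per-donation figure below is an assumption for illustration, not an estimate):

```python
# Rough sketch of the aggregation point above, with invented numbers.
# If each individual donation has only a small probability p of being the one that
# genuinely saves a life, then (treating donations as independent) the chance that
# at least one of n donations does so is 1 - (1 - p)^n.

def prob_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p) ** n

# A single donation with an assumed 2% chance looks low-probability on its own...
print(prob_at_least_one(0.02, 1))    # 0.02
# ...but 360 such donations (say, monthly for 30 years) push the probability well past 50%.
print(prob_at_least_one(0.02, 360))  # ~0.999
```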
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
Given that you seem to agree voting is fanatical, I’m guessing you want to consider the probability that an individual’s actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary.
If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
A longtermist career spans decades, as would going vegan for life or donating regularly over a comparable period. So it was mostly a temporal thing: trying to somewhat equalise the commitment associated with different altruistic choices.
but why should the locus of agency be the individual? Seems pretty arbitrary.
Hmm, well aren’t we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical?
If you agree that voting is fanatical, do you also agree that activism is fanatical?
Pretty much yes. To clarify—I have never said I’m against acting fanatically. I think the arguments for acting fanatically, particularly the one in this paper, are very strong. That said, something like a Pascal’s mugging does seem a bit ridiculous to me (but I’m open to the possibility I should hand over the money!).
Hmm, well aren’t we all individuals making individual choices? So ultimately what is relevant to me is whether my actions are fanatical?
We’re all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also “What counts as death?”.) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call “me in the past/future” and with the spatially-distant brain cognitions that we typically call “other people”. The temporally-distant cognitions are more similar to the current brain cognition than the spatially-distant cognitions are, but it’s fundamentally a quantitative difference, not a qualitative one.
That said, something like a Pascal’s mugging does seem a bit ridiculous to me (but I’m open to the possibility I should hand over the money!).
By “fanatical” I want to talk about the thing that seems weird about Pascal’s mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility.
If you agree there’s something weird there, and that longtermists don’t generally reason using that weird thing but typically do some other thing instead, that’s sufficient for my claim (b).
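To spell out the structure of that weirdness, here is a toy expected-value sketch in which every number is invented purely for illustration:

```python
# Toy sketch of the expected-value reasoning that makes Pascal's mugging feel weird.
# All numbers are invented; the point is only that a naive expected-utility calculation
# lets an arbitrarily tiny probability be swamped by a large enough promised payoff.

p_mugger_is_honest = 1e-12   # assumed: absurdly unlikely the mugger delivers
payoff_if_honest = 1e20      # assumed: astronomically large promised reward (in utility)
cost_of_paying = 100         # assumed: utility lost by handing over the money

ev_pay = p_mugger_is_honest * payoff_if_honest - cost_of_paying  # 1e8 - 100 > 0
ev_refuse = 0.0

# Naive expected value says to hand over the money no matter how small the probability,
# so long as the promised payoff is scaled up enough -- that is the "weird thing".
print(ev_pay > ev_refuse)  # True
```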
Certainly agree there is something weird there!
Anyway I don’t really think there was too much disagreement between us, but it was an interesting exchange nonetheless!