Thank you for your response – I think you make a great case! :)
I very much agree that Pascal’s Mugging is relevant to longtermist philosophy,[1] for reasons similar to those you’ve stated – for instance, that there is a trade-off between high existential risk and a high expected value of the future.[2]
I’m just pretty confused about whether this is the point Philosophy Tube is making. The Pascal’s mugging in the video has, as its astronomical upside, that “Super Hitler” is not born – because his birth would mean that “the future is doomed”. She doesn’t really address whether a big future is plausible or not. For me, her argument derives much of its force from how implausibly small the chance is of achieving that upside by preventing “Super Hitler” from being born.
And maybe I watched with too much of an eye for the relevance of Pascal’s Mugging to longtermist work on existential risk. I don’t think your version is very relevant unless existential risk work relies on astronomically large futures, which I don’t think much of it does. It seems a quite common-sense position that a big future is at least plausible – perhaps not Bostromian 10^42 future lives, but the ‘more than a trillion future lives’ that Abigail Thorn uses. If we assume a long-run population of around 10 billion, then we’d reach 1 trillion people having lived in 100*80 = 8,000 years.[3] That doesn’t seem to be an absurd timeframe for humanity to reach. I think most longtermist-inspired existential risk research/efforts still make sense in futures whose median outcome is only a trillion future lives.
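To spell out that back-of-the-envelope calculation (a rough sketch, assuming a steady population of 10 billion and non-overlapping 80-year lifespans):

$$\frac{10^{12}\ \text{lives}}{10^{10}\ \text{lives per 80-year span}} \times 80\ \text{years} = 100 \times 80\ \text{years} = 8{,}000\ \text{years}$$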
[1] I omitted this point from an earlier draft of the post, which in retrospect maybe wasn’t a good idea.

[2] I’m personally confused about this trade-off. If I had a higher p(doom), I’d want to have more clarity about it.

[3] I’m unsure if that’s a sensible calculation.
I should admit at this point that I didn’t actually watch the Philosophy Tube video, so I can’t comment on how this argument was portrayed there! And your response to that specific portrayal might well be spot on.
I also agree with you that most existential risk work probably doesn’t need to rely on the possibility of ‘Bostromian’ futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is); you don’t need it to be very, very, very bad.
But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by even a tiny amount has astronomical expected value).
This is also a line of attack that EA is facing more and more, and the reply “well, you don’t have to be on board with these sci-fi-sounding concepts to support work on existential risk” is one that people are understandably suspicious of if they think the person making it is on board with those sci-fi-like arguments anyway. It’s like when a vegan tries to make the case that a particular form of farming is unnecessarily cruel even to someone who’s otherwise OK with eating meat – it’s very natural to be suspicious of the vegan’s true motivations. (I say this as a vegan who takes part in welfare campaigns.)