On your response to the Pascal's mugging objection: I've seen your argument about Pascal's mugging and strong longtermism made before (that existential risk is actually very high, so we're not in a Pascal's mugging situation at all), but I think that reply misses the point a bit.
When people worry that the strong longtermist argument takes the form of a Pascal's mugging, the small probability they have in mind is not the probability of extinction; it is the probability that the future is enormous.
The controversial question here is: how bad would extinction be?
The strong-longtermist answer to this question is: there is a very small chance that the future contains an astronomical amount of value, so extinction would be astronomically bad in expected value terms.
On the strong longtermist view, existential risk then more or less automatically dominates all other considerations, because even a tiny shift in existential risk carries enormous expected value.
It is this argument that is said to resemble a Pascal's mugging, in which we are threatened/tempted with a small probability of enormous harm/reward. And I think this is a valid objection to strong longtermism. The "small probability" involved here is not the probability of extinction, but the small probability of us colonizing the galaxy and filling it with 10^(something big) digital minds.
Pointing out that existential risk is quite high does not undermine this objection to strong longtermism. If anything it strengthens it, because high existential risk reduces the chance that the future will be as big as it needs to be for the strong longtermist argument to go through.
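To make the structure of that worry concrete, here is a minimal sketch of the expected value arithmetic with entirely made-up numbers (the one-in-a-million chance of a huge future, the 10^35 of future value, and the one-in-a-billion risk reduction are purely illustrative assumptions, not anyone's actual estimates):

```python
# Toy illustration of why a tiny probability of an astronomically large
# future can dominate an expected value calculation.

P_HUGE_FUTURE = 1e-6       # assumed probability that the future is astronomically large
VALUE_HUGE_FUTURE = 1e35   # assumed value (e.g. future lives) if it is
RISK_REDUCTION = 1e-9      # assumed tiny reduction in extinction probability

# Expected value of the tiny risk reduction under the strong longtermist framing:
ev_risk_reduction = RISK_REDUCTION * P_HUGE_FUTURE * VALUE_HUGE_FUTURE

# Compare with an intervention whose payoff does not scale with the size of the future:
ev_ordinary_intervention = 1e6  # assumed value of a "normal" good outcome

print(f"EV of tiny x-risk reduction: {ev_risk_reduction:.2e}")        # 1.00e+20
print(f"EV of ordinary intervention: {ev_ordinary_intervention:.2e}")  # 1.00e+06
```

However you fiddle with the first three numbers, the conclusion is driven almost entirely by the astronomical value term, which is exactly the structure the Pascal's mugging objection is pointing at.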
Thank you for your response – I think you make a great case! :)
I very much agree that Pascal's Mugging is relevant to longtermist philosophy,[1] for reasons similar to the ones you've stated – for example, that there is a trade-off between high existential risk and a high expected value of the future.[2]
I'm just pretty confused about whether this is the point being made by Philosophy Tube. Pascal's mugging in the video has as its astronomical upside that "Super Hitler" is not born – because his birth would mean that "the future is doomed". She doesn't really address whether a big future is plausible or not. For me, her argument derives a lot of its force from the implausibly small chance of achieving the upside by preventing "Super Hitler" from being born.
And maybe I watched with too much of an eye for the relevance of Pascal's Mugging to longtermist work on existential risk. I don't think your version is very relevant unless existential risk work relies on astronomically large futures, which I don't think much of it does. I think it's quite a common-sense position that a big future is at least plausible. Perhaps not Bostromian 10^42 future lives, but the "more than a trillion future lives" that Abigail Thorn uses. If we assume a long-run population of around 10 billion and lifespans of around 80 years, then we'd get to a trillion future lives in roughly 100 * 80 = 8,000 years, as sketched below.[3] That doesn't seem an absurd timeframe for humanity to reach. I think most longtermist-inspired existential risk research/efforts still work with futures whose median outcome is only around a trillion future lives.
[1] I omitted this from an earlier draft of the post, which in retrospect maybe wasn't a good idea.
[2] I'm personally confused about this trade-off. If I had a higher p(doom), then I'd want to have more clarity about this.
[3] I'm unsure if that's a sensible calculation.
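For what it's worth, here is a minimal sketch of that back-of-the-envelope calculation, using only the assumptions stated above (a steady long-run population of roughly 10 billion and lifespans of roughly 80 years):

```python
# Rough cumulative-lives arithmetic: a steady-state population turns over
# roughly once per lifespan, adding about one population's worth of new
# lives each time.

steady_population = 10e9   # assumed long-run population (~10 billion people)
lifespan_years = 80        # assumed average lifespan
target_lives = 1e12        # "more than a trillion future lives"

turnovers_needed = target_lives / steady_population   # ~100 population turnovers
years_needed = turnovers_needed * lifespan_years      # ~8,000 years

print(f"Population turnovers needed: {turnovers_needed:.0f}")   # 100
print(f"Years to reach a trillion lives: {years_needed:,.0f}")  # 8,000
```

On those assumptions, a trillion future lives only requires humanity to stick around for a few thousand more years, which is the sense in which a big future seems at least plausible.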
I should admit at this point that I didn't actually watch the Philosophy Tube video, so can't comment on how this argument was portrayed there! And your response to that specific portrayal of it might be spot on.
I also agree with you that most existential risk work probably doesn't need to rely on the possibility of "Bostromian" futures (I like that term!) to justify itself. You only need extinction to be very bad (which I think it is); you don't need it to be very, very, very bad.
But I think there must be some prioritisation decisions where it becomes relevant whether you are a weak longtermist (existential risk would be very bad and is currently neglected) or a strong longtermist (reducing existential risk by a tiny amount has astronomical expected value).
This is also a line of attack that EA is facing more and more, and I think the reply "well, yeah, but you don't have to be on board with these sci-fi-sounding concepts to support work on existential risk" is one that people are understandably more suspicious of if they think the person making it is on board with those more sci-fi-like arguments. It's like when a vegan tries to make the case that a particular form of farming is unnecessarily cruel, even if you're OK with eating meat otherwise: it's very natural to be suspicious of their true motivations. (I say this as a vegan who takes part in welfare campaigns.)
I would like to add that there is also just the question of how strong a lot of these claims can be.
Maybe the future is super enormous. And maybe my eating sushi tomorrow night at 6pm instead of on Wednesday could have massive repercussions. But it could also have massive repercussions for me to eat sushi on Friday, or something.
A lot of things "could" have massive repercussions. Maybe if I hadn't missed the bus last week, Super Hitler wouldn't have been born.
There are some obvious low-hanging fruit in the world that would reduce the risk of catastrophe (say, nuclear disarmament, or the Seed Vault, or something). But there are also a lot of things whose mechanisms are less obvious, and which could go radically differently from how the people who outline them seem to think. Interventions to increase the number of liberal democracies on the planet and the amount of education could lead to more political polarization and social instability, for example. I'm not saying they would, but they could. Places that have been on the receiving end of "democratizing" interventions often wind up more politically unstable or dangerous for a variety of reasons, and the upward trend in education and longevity over the past few decades has coincided with an upward trend in polarization, depression, anxiety, and social isolation…
Sure, maybe there's some existential risk to humanity, and maybe the future is massive, but what reason do I have to believe that my eating sushi, or taking public transit, or donating to one charity over another, or reading some book, is actually going to have the specific effects claimed? Why wouldn't the unintended consequences outweigh the intended ones?
It's not just skepticism about the potential size of the future; it's skepticism about the cause-effect relationship being offered by the potential "mugger". Maybe we're 100% doomed and nothing we do will matter, because an asteroid we will never be able to detect, thanks to some unlucky astronomical coincidence, is going to hit us in 50 years, and all of it is pointless. Maybe some omnipotent deity is watching and will make sure we colonize the galaxy. Maybe research into AI risk will bring about an evil AI. Maybe research into AI is pointless because AIs will necessarily be hyperbenevolent due to some law of the universe we have not yet discovered. Maybe a lot of things.
Even with the dedication and careful thought that I have seen many people put into these probabilities, it always looks to me like there aren't enough variables in the models to be comfortable with any of it. And there are people who don't think about this in quantitative terms at all, who would find even my hypothetical, more comprehensive models inadequate.