In my understanding, Pascal’s Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to donate to charity in order to accomplish something. Let’s assume I don’t have enough money for that donation and have no other way to get it. Ever. I don’t care to spend the money I do have on anything else. Then, thinking altruistically, I’ll keep negotiating with Pascal’s Mugger until we agree on an amount for the mugger to return that, if I receive it, is sufficient to make the charitable donation. All I’ve done is establish what amount to demand in return before I hand the mugger my wallet cash. Whether the mugger is my only source of extra money, whether there is any other risk in losing the money I do have, and whether I already have enough money to make some difference if I donate, are not in question. Some people might object that my choice is irrational. However, the mugger is my only source of money, I don’t otherwise have enough to do anything I care about for others, and I’m not considering the consequences to me of losing the money.
In Yudkowsky’s formulation, the Mugger threatens to harm a vast number of people, but with very low probability. OK. I’m supposed to arrive at an amount I would give to help the people threatened with that improbable harm, right? In the thought experiment, I am altruistic. I decide what the probability of the Mugger’s threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn’t have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren’t people better off if I give that money to charity after all?
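The arithmetic behind this dogmatic choice can be sketched in a few lines. Assuming, purely for illustration, a credence of p = 0.1/n for n threatened people (any p strictly below 1/n behaves the same way), the expected number of lives the mugger’s offer saves is capped no matter how large n gets:

```python
def expected_lives_saved(n_threatened: int, cap: float = 0.1) -> float:
    """Dogmatic prior: credence p is chosen strictly below 1/n."""
    p = cap / n_threatened   # p shrinks as the claimed threat grows
    return p * n_threatened  # so p * n never exceeds `cap`

# However many people the mugger claims to threaten, the expected
# benefit of paying stays below one life, so a donation that reliably
# helps even one person wins the comparison.
for n in (10, 10**6, 10**100):
    print(n, expected_lives_saved(n))
```

The cap of 0.1 is a hypothetical value of mine, not something from the thought experiment; the point is only that the product p × n stays bounded while a reliable donation does not.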
You wrote,
“I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn’t even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger.”
The threshold of risk you refer to there is the additional, selfish one I mentioned in my last comment, where losing the money in an altruistic effort deprives me of some personal need the money could have served: an opportunity cost of wagering for more money with the mugger. That threshold of risk could be high even if the monetary amount is low. Let’s say I owe a bookie 5 dollars, and if I don’t repay, they’ll break my legs. Then, even though I could give the mugger 5 dollars and, by my own estimate, save some lives, I won’t, because the 5 dollars is all I have and I need it to repay the bookie. That personal need to protect myself from the bookie defines the threshold of risk. More likely, it’s my rent money, and without it I’m turned out onto predatory streets. Or it’s my food money for the week, my retirement money, or something else that pays for something integral to my well-being. That’s when that personal threshold is meaningful.
Many situations could come along offering astronomical altruistic returns, but if taking risks for those returns incurs high personal costs, then I’m not interested in those returns. This is why someone with a limited income or savings typically shouldn’t make bets. It’s also why Effective Altruism’s betting focus makes no sense for bets large enough to damage a person’s well-being when lost. I think it’s also why, in the end, EAs don’t put their money where their mouths are.
EAs don’t make large bets, or at least not bets that risk their well-being. Their “big risks” are not that big, to them. Or they truly have a betting problem, I suppose. EAs claim that betting money clarifies odds because bettors start worrying about opportunity costs, but does it? I think the amounts involved don’t clarify anything; they’re not important amounts to the people placing the bets. What you end up with is a betting culture, where unimportant bets have limited impact on Bayesian thinking at best and lead to compulsive betting and major personal losses at worst. By the way, Singer’s utilitarian ideal was never to bankrupt people. It was to accomplish charity cost-effectively, implicitly including personal costs in that calculus (for example, by scaling the percentage of income you give to charitable causes according to the size of your income). Just an aside.
When you write:
“I decide what the probability of the Mugger’s threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn’t have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren’t people better off if I give that money to charity after all?”
This is exactly the ‘dogmatic’ response to the mugger that I am trying to defend in this post! We are in complete agreement, I believe!
For possible problems with this view, see other comments that have been left, especially by MichaelStJules.
Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:
- the probability that you assign to the Mugger’s threat
- the probability that the Mugger or a third party assigns to the Mugger’s threat
Although I’m not a fan of subjective probabilities, that could be because I don’t make a lot of wagers.
There are other ways to qualify or quantify expectations about outcomes before they happen. One is by the degree or quality of match between a prototypical situation and the current context. A prototypical situation has one outcome. The current context could allow multiple outcomes, each matching a different prototypical situation. How do I decide which prototype is the “best” match?
- Fuzzy matching: a percentage quantity showing the degree of match between prototype and actual situation. This seems the least intuitive to me. Conflating multiple types and strengths of evidence (of match) into a single numeric scale (for example, this bit of evidence is worth 5%, that one is worth 10%) is hard to justify.
- Hamming distance: each binary digit is a yes/no answer to a question, with answers about the situation at hand compared against the answers identifying a prototypical situation. The questions could be partitioned, the partitions ranked, and a Hamming distance calculated within each ranked partition.
- Decision tree: the actual context is checked for specific values of its attributes, with different paths through the tree ending in “matches prototypical situation X” or “doesn’t match prototypical situation X”. The decision tree is the most intuitive to me, and it does not involve any sums.
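As a concrete illustration of the Hamming-distance option, here is a minimal sketch; the yes/no questions and the prototype encodings are hypothetical, invented just for this example:

```python
def hamming(answers_a, answers_b):
    """Count the yes/no questions on which two situations disagree."""
    return sum(a != b for a, b in zip(answers_a, answers_b))

# Hypothetical questions, in order:
# (stranger? demands money now? threat verifiable? service contract exists?)
situation = (1, 1, 0, 0)  # the encounter with the mugger
prototypes = {
    "street mugging":       (1, 1, 0, 0),
    "payment for services": (0, 1, 1, 1),
    "bribe":                (0, 1, 0, 1),
}
# The best match is the prototype at the smallest distance.
best_match = min(prototypes, key=lambda name: hamming(situation, prototypes[name]))
```

Ranked partitions of questions could then be handled by computing one such distance per partition and comparing the results lexicographically, most important partition first.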
In this case, the context is one where you decide whether to give any money to the mugger, and the prototypical context is a payment for services, or a bribe. If it were me, the fact that the mugger is a mugger on the street yields the belief “don’t give”, because even if I gave them the money, they wouldn’t do whatever it is that they promise anyway. That information would appear in a decision tree, somewhere near the top, as “person asking for money is a criminal? (Y/N)”.
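That top-of-tree check might look like the following toy sketch; the attributes and their ordering are my own hypothetical illustration, not anything fixed by the thought experiment:

```python
def give_money(asker_is_criminal: bool,
               money_is_vital: bool,
               return_covers_donation: bool) -> bool:
    # Near the top of the tree: a criminal's promise is worthless,
    # so every path below this node ends at "don't give".
    if asker_is_criminal:
        return False
    # The personal threshold of risk: never wager money that is
    # integral to your own well-being (rent, food, the bookie's 5 dollars).
    if money_is_vital:
        return False
    # Otherwise give only if the negotiated return would actually
    # fund the charitable donation.
    return return_covers_donation

print(give_money(asker_is_criminal=True,
                 money_is_vital=False,
                 return_covers_donation=True))  # prints False: the mugger case
```

No sums or probabilities are involved; the first attribute that fires decides the path, which is what makes this form the most intuitive of the three.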