If we know the probabilities with certainty somehow (because God tells us, or whatever) then dogmatism doesn't help us avoid reckless conclusions. But it's an explanation for how we can avoid most reckless conclusions in practice (it's why I used the word "loophole", rather than "flaw"). So if someone comes up and utters the Pascal's mugger line to you on the street in the real world, or maybe if someone makes an argument for very strong longtermism, you could reject it on dogmatic grounds.
On your point about diminishing returns to utility preventing recklessness, I think that's a very good point if you're making decisions for yourself. But what about when you're doing ethics? Deciding which charities to give to, for example? If some action affecting N individuals has utility X, then some action affecting 2N individuals should have utility 2X. And if you accept that, then suddenly your utility function is unbounded, and you are now open to all these reckless and fanatical thought experiments.
You don't even need a particular view on population ethics for this. The Pascal mugger could tell you that the people they are threatening to torture/reward already exist in some alternate reality.
Hm, ok. Couldn't Pascal's mugger claim to actually be God (with some small probability, or very weak plausibility) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of the claims made by the person you are rejecting. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and of accurate probability estimates are typical expressions of delusions of grandeur.
My pursuit of giving to charity is not unbounded, because I don't perceive an unbounded need. If the charity were meant to drive an unbounded increase in the number of those receiving it, that would be a special case, and not one that I would sign up for. But putting aside truly infinite growth of perceived need, in any wager of this sort a person establishes a needed level of utility and compares the risk to stakeholders of wagering at that utility level against the risks of doing nothing or wagering for less than the required level.
In the case of ethics, you could add an additional bound on the personal risk that you would endure despite the full need of those who could receive your charity. In other words, there's only so much risk you would take on behalf of others. How you decide that should be up to you. You could want to help a certain number of people, or reach a specific milestone toward a larger goal, or meet a specific need for everyone, or spend a specific amount of money, or what have you, and recognize that level of charity as worth the risks involved to you of acquiring the corresponding utility. You just have to figure it out beforehand.
If, by living an extra 100 years, I could accomplish something significant on behalf of others, but not everything I wanted, and I would not personally enjoy that time, then that subjective trade-off makes living past 100 unattractive, if I'm deciding solely on the basis of my charitable intent. I would not, in fact, live an extra 100 years for such a purpose without meeting additional criteria, but I offer it for example's sake.
I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger.
But you haven't really avoided the problem, just re-phrased it slightly. Whatever amount of money you would be willing to risk for others, on expected utility terms it seems better to give it to the mugger than to an excellent charity such as the Against Malaria Foundation. In this framing of the problem, the mugger is now effectively robbing the AMF rather than you, but the problem is still there.
In my understanding, Pascal's Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity in order to accomplish something. Let's assume that I don't have money sufficient for that donation, and have no other way to get that money. Ever. I don't care to spend the money I do have on anything else. Then, thinking altruistically, I'll keep negotiating with Pascal's Mugger until we agree on an amount that the mugger will return that, if I receive it, is sufficient to make that charitable donation. All I've done is establish what amount to get in return from the Mugger before I give the mugger my wallet cash. Whether the mugger is my only source of extra money, whether there is any other risk in losing the money I do have, and whether I already have enough money to make some difference if I donate, is not in question. Notice that some people might object that my choice is irrational. However, the mugger is my only source of money, I don't have enough money otherwise to do anything that I care about for others, and I'm not considering the consequences to me of losing the money.
In Yudkowsky's formulation, the Mugger is threatening to harm a bunch of people, but with very low probability. Ok. I'm supposed to arrive at an amount that I would give to help those people threatened with that improbable risk, right? In the thought experiment, I am altruistic. I decide what the probability of the Mugger's threat is, though. The mugger is not God, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger), because no matter how many people the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?
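To make that arithmetic concrete, here is a minimal sketch in Python. All of the specific numbers (the credence scale, the donation size, the charity's cost per life) are assumptions invented for illustration. The structural point is that if the credence p(N) shrinks at least as fast as 1/N, then the expected number of people affected, p(N) × N, stays bounded no matter how large an N the mugger quotes, so the offer cannot be made arbitrarily compelling just by naming bigger numbers, and the comparison with an ordinary donation becomes a comparison of finite quantities.

```python
# Minimal sketch of the "dogmatic" bound on Pascal's mugger, assuming we
# may choose our own credence in the mugger's claim. The charity figures
# below are invented for illustration only.

def mugger_expected_lives(n_threatened: int, scale: float = 0.5) -> float:
    """Credence p(N) = scale / N (strictly below 1/N for scale < 1), so the
    expected number of lives affected, p(N) * N, equals `scale` however
    large N becomes."""
    p = scale / n_threatened
    return p * n_threatened

def charity_expected_lives(dollars: float, cost_per_life: float = 5000.0) -> float:
    """Expected lives helped by a near-certain donation (assumed cost per life)."""
    return dollars / cost_per_life

for n in (10**3, 10**9, 10**100):
    print(f"N = {n:.0e}: mugger EV = {mugger_expected_lives(n):.2f} lives")
print(f"$5 donation EV = {charity_expected_lives(5.0):.4f} lives")
# The mugger's expected value no longer grows with N; whether the donation
# wins then depends on how small a bounded credence you think is warranted.
```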
You wrote,
"I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger."
The threshold of risk you refer to there is the additional selfish one that I referred to in my last comment, where loss of the money in an altruistic effort deprives me of some personal need that the money could have served: an opportunity cost of wagering for more money with the mugger. That threshold could represent a lot of risk even if the monetary amount is low. Let's say I owe a bookie 5 dollars, and if I don't repay they'll break my legs. Then, even though I could give the mugger 5 dollars and, in my estimation, save some lives, I won't, because the 5 dollars is all I have and I need it to repay the bookie. That personal need to protect myself from the bookie defines the threshold of risk. Or, more likely, it's my rent money, and without it I'm turned out onto predatory streets. Or it's my food money for the week, or my retirement money, or something else that pays for something integral to my well-being. That's when that personal threshold is meaningful.
Many situations could come along offering astronomical altruistic returns, but if taking risks for those returns will incur high personal costs, then I'm not interested in those returns. This is why someone with a limited income or savings typically shouldn't make bets. It's also why Effective Altruism's betting focus makes no sense for bets sized so that they impact a person's well-being when lost. I think it's also why, in the end, EAs don't put their money where their mouths are.
EAs don't make large bets, or they don't make bets that risk their well-being. Their "big risks" are not that big, to them. Or they truly have a betting problem, I suppose. EAs claim that betting money clarifies odds because they start worrying about opportunity costs, but does it? I think the amounts involved don't clarify anything; they're not important amounts to the people placing the bets. What you end up with is a betting culture, where unimportant bets go on, leading to limited impact on Bayesian thinking at best, and to compulsive betting and major personal losses at worst. By the way, Singer's utilitarian ideal was never to bankrupt people. It was to accomplish charity cost-effectively, implicitly including personal costs in that calculus (for example, by scaling the percentage of income you give to charitable causes according to your income). Just an aside.
When you write:
"I decide what the probability of the Mugger's threat is, though. The mugger is not God, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger), because no matter how many people the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?"
This is exactly the "dogmatic" response to the mugger that I am trying to defend in this post! We are in complete agreement, I believe!
For possible problems with this view, see other comments that have been left, especially by MichaelStJules.
Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:
probability that you assign to the Mugger's threat
probability that the Mugger or a third party assigns to the Mugger's threat
Although I'm not a fan of subjective probabilities, that could be because I don't make a lot of wagers.
There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situation has one outcome. The current context could allow multiple outcomes, each matching a different prototypical situation. How do I decide which situation is the "best" match?
a fuzzy matching: a percentage quantity showing the degree of match between prototype and actual situation. This seems the least intuitive to me. The conflation of multiple types and strengths of evidence (of match) into a single numeric scale (for example, this bit of evidence is worth 5%, that bit is worth 10%) is hard to justify.
a Hamming distance: each binary digit is a yes/no answer to a question. The questions could be partitioned, with the partitions ranked, and then Hamming distances calculated for each ranked partition, comparing answers about the situation in question with the answers that identify a prototypical situation (a rough sketch of this follows the list).
a decision tree: each situation could be checked for specific values of attributes of the actual context, yielding a final "matches prototypical situation X" or "doesn't match prototypical situation X" along different paths of the tree. The decision tree is most intuitive to me, and does not involve any sums.
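As a rough sketch of how the Hamming-distance idea might look in code (the yes/no questions, the partitions, and the prototype answers below are all invented for illustration):

```python
# Rough sketch: prototype matching by Hamming distance over ranked
# partitions of yes/no questions. Questions, partitions, and answers
# are invented for illustration.

from typing import Dict, List

def hamming(a: List[bool], b: List[bool]) -> int:
    """Number of positions where two lists of yes/no answers disagree."""
    return sum(x != y for x, y in zip(a, b))

def ranked_distances(partitions: List[List[str]],
                     situation: Dict[str, bool],
                     prototype: Dict[str, bool]) -> List[int]:
    """One Hamming distance per partition, in rank order (lower = closer match)."""
    return [
        hamming([situation[q] for q in part], [prototype[q] for q in part])
        for part in partitions
    ]

# Illustrative questions, most important partition first.
partitions = [
    ["asker is a stranger on the street", "asker controls the promised outcome"],
    ["payment is demanded up front", "outcome is verifiable afterwards"],
]
prototype_payment_for_services = {
    "asker is a stranger on the street": False,
    "asker controls the promised outcome": True,
    "payment is demanded up front": False,
    "outcome is verifiable afterwards": True,
}
mugger_situation = {
    "asker is a stranger on the street": True,
    "asker controls the promised outcome": False,
    "payment is demanded up front": True,
    "outcome is verifiable afterwards": False,
}

# [2, 2]: maximal disagreement in both partitions, i.e. a poor match.
print(ranked_distances(partitions, mugger_situation, prototype_payment_for_services))
```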
In this case, the context is one where you decide whether to give any money to the mugger, and the prototypical context is a payment for services or a bribe. If it were me, the fact that the mugger is a mugger on the street yields the belief "don't give" because, even if I gave them the money, they'd not do whatever it is that they promise anyway. That information would appear in a decision tree, somewhere near the top, as "person asking for money is a criminal? (Y/N)".
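A comparably minimal sketch of the decision-tree version for this same context, again with made-up questions and outcomes rather than anything from the discussion above:

```python
# Minimal sketch of a decision-tree match for the mugger context. The
# questions and outcomes are made-up assumptions for illustration.

def matches_payment_for_services(situation: dict) -> bool:
    """Walk a small tree of yes/no attribute checks; the 'criminal?' question
    sits near the top, as suggested above."""
    if situation.get("person asking for money is a criminal", False):
        return False  # "don't give": the promise won't be honoured anyway
    if not situation.get("promised outcome is verifiable", False):
        return False
    return situation.get("price agreed in advance", False)

mugger = {
    "person asking for money is a criminal": True,
    "promised outcome is verifiable": False,
    "price agreed in advance": True,
}
print(matches_payment_for_services(mugger))  # False -> "don't give"
```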