Such as “if I give my wallet over now, I won’t have money to give to a REAL Trass or Smiggledorfian fart goblin later on,” or “if I waste my time and money here, I have less time to ponder the design of the universe and find a flaw in the matrix that allows me to escape and live forever in paradise (or to figure out which supernatural/religious belief is valid and act on that to get infinite utilons).”
Again, I just think the mugger being able to say any number means they can overcome any alternative (in EV terms).
For example, you can calculate the EV of the possibility of meeting a real wizard in a year who then generates an insane amount of utility. Your EV calculation will spit out some sort of number, right? Well, the mugger in front of you can beat whatever number that is by saying whatever number they need to beat it.
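To make this concrete, here's a rough sketch in Python of how the mugger outbids any finite alternative. All of the numbers (p_real_wizard, wizard_utility, p_mugger_honest) are made up for illustration:

```python
# Illustrative sketch: whatever finite EV your best alternative has,
# the mugger can name a payoff that pushes their offer's EV above it.

p_real_wizard = 1e-6    # hypothetical chance of meeting a real wizard in a year
wizard_utility = 1e12   # hypothetical utility that wizard would generate
ev_alternative = p_real_wizard * wizard_utility  # = 1e6 utilons

p_mugger_honest = 1e-15  # your (tiny) credence that the mugger is telling the truth

# The mugger only needs to claim more than ev_alternative / p_mugger_honest utilons.
claim_needed = ev_alternative / p_mugger_honest  # = 1e21 utilons
mugger_claim = 10 * claim_needed                 # they just say a bigger number

ev_mugger = p_mugger_honest * mugger_claim       # = 1e7 utilons
print(ev_mugger > ev_alternative)                # True: the mugger wins on EV
```

Note this sketch holds your credence in the mugger fixed as the claimed payoff grows; if that credence instead shrank at least as fast as the claim grew, the mugger could no longer win this way.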
Or maybe the EV is undefined because you start talking about infinite utility (as you allude to). In this case, EV reasoning pretty much breaks down.
Which brings me to the point that rejecting EV reasoning in the first place can get you out of handing over the wallet. Maybe there’s a problem with EV reasoning when dealing with infinite utilities, for example, or when probabilities get really small.
Indeed, I’m not sure we should actually hand over the wallet, because I’m open to the possibility that EV reasoning is flawed, at least in this case. See the St. Petersburg Paradox and the Pasadena Game for potential issues with EV reasoning. I still think EV reasoning is pretty useful in more regular circumstances, though.
Such as, “Actually, this person is testing me, and will do the exact opposite of what they claim if and only if I do what they are telling me to do.”
My response to this is the same as the one I gave originally to Sanjay. I don’t think this is a compelling argument.
EDIT: Also see Derek Shiller’s comment which I endorse.
For example, you can calculate the EV of the possibility of meeting a real wizard in a year who then generates an insane amount of utility. Your EV calculation will spit out some sort of number, right? Well, the mugger in front of you can beat whatever number that is by saying whatever number they need to beat it.
That’s why I wrote out “infinite utilons” with such emphasis, and in a previous comment also wrote “If we want to just end this back and forth, we’ll just jump straight to claims of infinity [...]”. But you do continue:
Or maybe the EV is undefined because you start talking about infinite utility (as you allude to). In this case, EV reasoning pretty much breaks down.
I disagree (at least insofar as I conceptualize EV and if you are just saying “you can’t compare fractional infinities”), as I already asserted:
a 20% chance of achieving an infinitely good outcome has higher expected value (or, is a better option) than a 10% chance of achieving an infinitely good outcome, ceteris paribus.
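For what it's worth, one way to make that assertion precise (my own illustrative framing, not a formalism from this exchange): in the extended reals, plain EV arithmetic cannot separate the two options, so the ranking has to come from comparing the probabilities of the infinite outcome directly.

```latex
% Plain EV in the extended reals collapses the comparison:
\[
\mathrm{EV}(A) = 0.2 \cdot \infty = \infty,
\qquad
\mathrm{EV}(B) = 0.1 \cdot \infty = \infty .
\]
% One illustrative refinement that recovers the intuitive ranking:
% rank options by the probability of the infinite outcome first,
\[
A \succ B \iff P_A(\text{infinite outcome}) > P_B(\text{infinite outcome}),
\]
% breaking ties by the EV of the finite remainder.
```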
Summarizing your thoughts, you offer the following conclusion:
Which brings me to the point that rejecting EV reasoning in the first place can get you out of handing over the wallet. Maybe there’s a problem with EV reasoning when dealing with infinite utilities, for example, or when probabilities get really small.
Indeed, I’m not sure we should actually hand over the wallet, because I’m open to the possibility that EV reasoning is flawed, at least in this case. See the St. Petersburg Paradox and the Pasadena Game for potential issues with EV reasoning.
I’m also somewhat open to the idea that EV reasoning is flawed, especially my interpretation of the concept. However:
I am very skeptical of rejecting EV reasoning without any kind of replacement, which I don’t think I’ve seen you offer.
This conversation has not moved me closer to believing that I should reject EV reasoning due to Pascalian mugging objections, of which I don’t see any standing/compelling examples. In fact, it is through EV reasoning that I find these objections uncompelling.
The St. Petersburg Paradox and Pasadena Game are interesting objections with which I’m decently familiar (especially the former); I’ve written one or two somewhat long comments about the former some time ago. However, these are also not persuasive, for some of the same reasons I’m articulating here (e.g., it might be just as likely that shoving your $100 bill in some random tree hollow will reveal a Korok that breaks the Matrix and gives you infinite utilons as it is that you will flip heads for an infinitely long time).
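For readers who haven't seen it: in the St. Petersburg game you flip a fair coin until it lands heads, and the payout doubles with each additional flip it takes. A minimal sketch of why the game's EV diverges (the payoff structure is the standard one; the code is just my illustration):

```python
# St. Petersburg game: flip a fair coin until the first head; if the
# first head arrives on flip k, you win 2**k utilons.
# Each possible stopping point contributes (0.5**k) * (2**k) = 1 to the
# EV sum, so the truncated EV grows without bound.

def st_petersburg_partial_ev(n_terms: int) -> float:
    """EV of the game truncated after n_terms possible stopping points."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_ev(n))  # prints 10.0, 100.0, 1000.0
```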
You know, I actually think I’ve taken completely the wrong line with my argument, and if I could erase my previous comments I would.
The point is that Pascal’s mugging is a thought experiment designed to illustrate something specific, and I can alter it as required. So I can remove the possibility of amazing alternative uses for the money in the wallet if I have to.
For example, let’s say you have a large amount of money in your wallet and, for whatever reason, you can only use that money to buy yourself stuff you really like: say, it’s a currency that you know is only accepted by a store that sells fancy cars, clothes, and houses. Assume you really do know you can’t use the money for anything else (it’s a thought experiment, so I can make this restriction).
Now imagine that you run into a Pascal’s mugging. Your “opportunity costs are correlated with outcome magnitudes that the person is claiming” argument no longer applies in this thought experiment. Do you now hand over the wallet and miss out on all of that amazing stuff you could have bought from the store?
I know this all sounds weird, but thought experiments are weird...
Well, with those extra assumptions I would no longer consider it a Pascalian mugging; I would probably just consider it a thought experiment that is prone to mislead.
Would I take a 1*10^-10 chance of getting 10^15 utilons over the option of a 100% chance of 1 utilon? Well, if we assume that:
Utilons are the only thing that matters for morality/wellbeing (ignore all moral uncertainty),
I don’t have any additional time to reduce my uncertainty (or I have maximal certainty of the 1*10^-10 chance estimate),
The 1*10^-10 chance estimate already takes into account all of the outside views, intuitions, and any other arguments that say “this is extremely improbable”,
There’s no such thing as opportunity cost,
Diminishing marginal returns don’t apply,
Psychological and social factors are non-existent,
My answer to this hypothetical question will not be used to judge my intelligence/character in the real world,
(And how about for good measure, “Assume away any other argument which could dispute the intended conclusion”),
Then the answer is probably yes?
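Spelling out the bare arithmetic behind that answer, using just the two numbers from the question:

```python
# The gamble from the question, with every complicating factor assumed away.
p = 1e-10         # probability of the big payoff
payoff = 1e15     # utilons if it comes off
sure_thing = 1.0  # utilons from the certain option

ev_gamble = p * payoff         # = 1e5 utilons
print(ev_gamble > sure_thing)  # True: the gamble wins by five orders of magnitude
```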
I have a hard-to-articulate set of thoughts on what makes some thought experiments valuable vs. misleading. One of them is something along the lines of: “It’s unhelpful to generate an unrealistic thought experiment and then use the answer/response produced by a framework/system as an argument against using that framework/system in the real world, if the argument is (at least implicitly) ‘Wow, look at what an unreasonable answer this could produce for real world problems.’ This is especially so given how people may be prone to misinterpret such evidence (especially due to framing effects) for frameworks/systems that they are unfamiliar with or are already biased against.”
But I don’t have time to get into that can of worms, unfortunately.
Pascal’s mugging is only there to uncover whether there might be a potential problem with using EV maximisation when probabilities get very small. It’s a thought experiment in decision theory. For that reason I actually think my altered thought experiment is useful, as I think you were introducing complications that distract from the central message of the thought experiment. Pascal’s mugging doesn’t in itself say anything about the relevance of these issues to real life. It may all be a moot point at the end of the day.
It sounds to me as if you don’t see any issues with EV maximisation when probabilities get very small. So in my altered thought experiment you would indeed give away your wallet to some random dude claiming to be a wizard, thereby giving up all those awesome things from the store. It’s worth at least noting that many people wouldn’t do the same, and who is right or wrong is where the interesting conundrum lies.
It’s worth at least noting that many people wouldn’t do the same
I don’t think many people are capable of actually internalizing all of the relevant assumptions, which in real life would be totally unreasonable; nor do most people have a really good sense of why they have certain intuitions in the first place. So it’s not particularly surprising/interesting that people would have very different views on this question.