Ben Esche - Effective Altruism forum viewer
https://ea.greaterwrong.com/
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-vJtSti8NFeLPeZmLP
Thank you very much—I’m part way through Christian Tarsney’s paper and am definitely finding it interesting. I’ll also have a go at Hilary Greaves’ piece. Listening to her on the 80,000 Hours podcast was one of the things that prompted this question. She seems (at least there) to accept EV as the obviously right decision criterion, but a podcast probably necessitates simplifying her views!

Ben Esche · Sun, 04 Apr 2021 14:04:07 +0000
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-kkjB8WCTeq4fsTx6d
Thanks very much. I am going to spend some time thinking about the von Neumann–Morgenstern theorem. Despite my huge in-built bias towards believing things labelled “von Neumann”, at an initial scan only one of the axioms (transitivity) felt obviously “true” to me for questions like “how good is the whole world?”. They all seem true if you’re actually playing games of chance for money, of course, which often seems to be the model. But I intend to think about that harder.

On GiveWell, I think they’re doing an excellent job of trying to answer these questions. I guess I tend to get a bit stuck at the value-judgement level (e.g. how to decide what fraction of a human life a chicken life is worth). But it doesn’t matter much in practice, because I can fall back on a gut-level view and still choose a charity from their menu, confident it’ll be pretty damn good.

Ben Esche · Sun, 04 Apr 2021 13:53:01 +0000
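As a minimal sketch of what the von Neumann–Morgenstern picture amounts to in the games-of-chance-for-money case: once the axioms hold, ranking gambles by expected utility is the whole story, and ordinary risk aversion over money can be captured by a concave utility function rather than by abandoning EV. The figures below are invented purely for illustration.

```python
# Sketch: ranking lotteries by expected utility (illustrative numbers only).
# A "lottery" is a list of (probability, outcome) pairs.
from math import log

def expected_utility(lottery, utility):
    """Expected utility of a lottery under a given utility function."""
    return sum(p * utility(x) for p, x in lottery)

# Two monetary gambles (made up): a sure £1,000 vs. a 50/50 shot at £3,000 or £1.
sure_thing = [(1.0, 1_000)]
gamble = [(0.5, 3_000), (0.5, 1)]  # £1 rather than £0 so log is defined

risk_neutral = lambda x: x       # linear utility: just expected money
risk_averse = lambda x: log(x)   # concave utility: diminishing returns

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    print(name,
          round(expected_utility(sure_thing, u), 2),
          round(expected_utility(gamble, u), 2))
# The risk-neutral agent prefers the gamble (EV £1,500 > £1,000); the
# risk-averse one prefers the sure £1,000 (log 1000 ≈ 6.9 > 0.5 * log 3000 ≈ 4.0).
```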
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-a6mEFxqchwoiwJpEL
Hi Harrison. I think I agree strongly with (2) and (3) here. I’d argue that infinite expected values which depend on (very) large numbers of trials, bankrolls, etc. can and should be ignored. With the St. Petersburg paradox as stated in the link you included, making any vaguely reasonable assumption about the wealth of the casino, or the lifetime of the player, the expected value falls to something much less appealing! This is related to my “saving lives” example in my question—if you only get to play once, the expected value becomes basically irrelevant because the good outcome just doesn’t happen. It only starts to be worthwhile when you get to play many times. And hey, maybe you do. If there are 10,000 EAs all doing totally (probabilistically) independent things that each have a 1 in a million chance of some huge payoff, we start to get into realms worth thinking about.

Ben Esche · Sun, 04 Apr 2021 13:52:45 +0000
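To put a number on the bounded-bankroll point, here is a rough sketch; the casino-wealth caps below are assumptions chosen only to show the scaling. With payouts capped at W, the St. Petersburg expected value is roughly log2(W), so even an implausibly rich casino yields a modest figure.

```python
# Sketch: expected value of the St. Petersburg game when the casino's
# bankroll is capped. Payout is 2**k if the first head appears on toss k
# (probability 2**-k); the casino pays at most `max_payout`.
def truncated_st_petersburg_ev(max_payout, max_tosses=200):
    ev = 0.0
    for k in range(1, max_tosses + 1):
        ev += (0.5 ** k) * min(2 ** k, max_payout)
    return ev

# Assumed bankrolls, purely illustrative.
for wealth in [1_000_000, 1_000_000_000, 10**15]:
    print(f"cap {wealth:>19,}: EV ≈ {truncated_st_petersburg_ev(wealth):.2f}")
# Uncapped, the EV diverges; with a cap of W it is roughly log2(W)
# (about 21 for a million-unit bankroll, about 51 even at 10**15).
```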
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-f6YcywfNb2FGk6ZQH
Hi Larks. Thanks for re-framing the point this way. I think I still disagree, but it’s helpful to see an angle I really hadn’t thought of. I still disagree because I am assuming I only get one chance at the action, and personally I don’t value a 1 in a million chance of being saved above zero. I think that if I know I’m not going to face the same choice many times, it is better to save 10 people than to let everyone die and then go around telling people I chose the higher expected value!

Ben Esche · Sun, 04 Apr 2021 13:52:28 +0000
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-ir676YjohcjhY2d7j
Thank you—this is all very interesting. I won’t try to reply to all of it, but I thought I would respond to agree on your last point. I think x-risk is worth caring about precisely because the probability seems to be in the “actually might happen” range. (I don’t believe anyone knows whether it’s 1/6 vs. 1/10 or 1/2, but Toby Ord doesn’t claim to either, does he?) It’s when you get to the “1 in a million but with a billion payoff” range that I start to get skeptical, because then the thing in question just won’t happen, barring many plays of the game.

Ben Esche · Sun, 04 Apr 2021 13:52:09 +0000
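A quick sketch of the “barring many plays of the game” point, using the 1-in-a-million figure from the thread; the play counts are arbitrary assumptions. The chance that at least one of n independent attempts pays off is 1 - (1 - p)^n, which is negligible for a single play and only becomes substantial at enormous n.

```python
# Sketch: probability that at least one of n independent long-shot attempts
# succeeds, for a per-attempt success probability p. Play counts are made up.
p = 1e-6  # 1 in a million, as in the thread

for n in [1, 100, 10_000, 1_000_000]:
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>9,} plays: P(any success) ≈ {at_least_one:.4%}")
# 1 play:          0.0001%  (the good outcome essentially never happens)
# 10,000 plays:    ~1%      (the "10,000 independent EAs" scenario)
# 1,000,000 plays: ~63%     (only here does the long shot become likely)
```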
Comment by Ben Esche on Mundane trouble with EV / utility
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility#comment-wEuy37QBd36S5znKY
Dear all—just a note to say thank you for all the fantastic answers, which I will dedicate some time to exploring soon. I posted this and was then offline for a day, and am delighted to find five really thoughtful answers on my return. Thank you all for taking the time to explain these points to me. Seems like this is a pretty awesome forum.

Ben Esche · Sun, 04 Apr 2021 07:51:21 +0000
Mundane trouble with EV / utility by Ben Esche
https://ea.greaterwrong.com/posts/jTBk3LPGN4NDGeYhf/mundane-trouble-with-ev-utility
Hi all,

My first post here—please be kind!

I’ve recently been listening to and reading various arguments around EA topics that rely heavily on reasoning about a loosely defined notion of “expected value”. Coming from a mathematics background, and having no grounding at all in utilitarianism, this has left me wondering why expected value is used as the default decision rule for evaluating the (speculative, future) effects of EA actions, even when the probabilities involved are small (and uncertain).

I have seen a few people here refer to Pascal’s mugging, and I agree with that critique of EV. But it doesn’t seem to me that you need to go anywhere near that far before things break down. Naively (?), I’d say that if you invest your resources in an action with a 1 in a million chance of saving a billion lives, and a 999,999 in a million chance of having no effect, then (the overwhelming majority of the time) you haven’t done anything at all. It only works if you can do the action many, many times, and get an independent roll of your million-sided dice each time. To take an uncontroversial example: if the lottery were super-generous and paid out £100 trillion, but I had to work for 40 years to get one ticket, the EV of doing that might be, say, £10 million. But I still wouldn’t win. So I’d actually get nothing if I devoted my life to that. Right...?

I’m hoping the community here might be able to point me to something I could read, or just tell me why this isn’t a problem and why I should be motivated by things with low probabilities of ever happening but high expected values.

If anyone feels like humoring me further, I also have some more basic/fundamental doubts, which I expect are just things I don’t know about utilitarianism but which often seem to be taken for granted. At the risk of looking very stupid, here they are:

1. Why does anyone think human happiness/wellbeing/flourishing/worth-living-ness is a numerical quantity? Based on subjective introspection about my own “utility”, I can identify some different states I might be in, and a partial order of preference (I prefer feeling contented to being in pain, but I can’t rank all my possible states), but the idea that I could be, say, 50% happier seems to have no meaning at all.

2. If we grant the first thing—that we each have numerical values associated with our wellbeing—why does anyone expect there to be a single summary statistic (like a sum total, or an average) that can be used to combine everyone’s individual values and decide which of two possible worlds is better? There seems to be debate about whether “total utilitarianism” is right, or whether some other measure is better. But why should there be such a measure at all?

3. In practice, even if there is such a statistic, how does one use it? It’s hard to deny the obviousness of “two lives saved is better than one”, but as soon as you try to compare unlike things it immediately feels much harder and non-obvious. How am I supposed to use “expected value” to actually compare, in the real world, certainly saving 3 lives with a 40% chance of hugely improving the educational level of ten children (assuming these are the end outcomes—I’m not talking about whether educating the kids saves more lives later or something)? And then people want to talk about and compare values for whole future worlds many years hence—wow.

I have a few more I’m less certain about, but I’ll stop for now and see how this lands.
Cheers for reading the above. I’ll be very grateful for explanations of why I’m talking nonsense, if you have the time and inclination!

Ben Esche · Sat, 03 Apr 2021 07:51:36 +0000
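To make the lottery arithmetic in the post concrete: the £100 trillion payout is taken from the post, while the 1-in-10-million odds are an assumed figure chosen so the expected value matches the quoted £10 million. The sketch shows how a large EV can coexist with an essentially zero chance of the ticket ever paying out.

```python
# Sketch of the lottery example: figures are illustrative assumptions,
# chosen to match the £100 trillion payout and £10 million EV in the post.
payout = 100e12           # £100 trillion prize
p_win = 1e-7              # assumed odds: 1 in 10 million per ticket
tickets_per_lifetime = 1  # one ticket for 40 years of work, as in the post

ev = p_win * payout
p_ever_win = 1 - (1 - p_win) ** tickets_per_lifetime

print(f"Expected value per ticket: £{ev:,.0f}")              # £10,000,000
print(f"Chance the ticket ever pays out: {p_ever_win:.5%}")  # ~0.00001%
```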