In extremely high-stakes scenarios, it’s ok not to maximise expected utility
EA is associated with maximising expected utility. This leads to counterintuitive recommendations, such as (repeatedly) pressing a button that destroys the world with 99% probability but has a 1% chance of creating a (10 000+)-times better world. I claim that:
the assumptions underlying expected utility maximisation do not apply to these scenarios,
it’s ok if you don’t want to press that button,
it’s ok if EA is not about pressing that button,
[and aaaaaah, for f...s sake, stop associating EA with pressing that button].[1]
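To spell out the arithmetic behind the button example (a quick sketch; the 99% / 1% / 10 000x numbers are the hypothetical ones from above):

```python
# The button: with probability p the world becomes k times better,
# with probability 1 - p it is destroyed. Hypothetical numbers from above.
p, k = 0.01, 10_000

# Expected utility of one press, measured in "current worlds":
ev_press = p * k + (1 - p) * 0
print(ev_press)  # ~100x the status quo, so EV maximisation says: press

# But the chance that anything survives n presses vanishes fast:
for n in (1, 2, 5):
    print(n, p ** n)
```

So each press is wildly EV-positive, and yet almost every press ends the world; that tension is the whole point of the post.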
More detailed explanation:
An argument for maximising expected utility goes something like this:
Suppose you were risk averse: for example, suppose you preferred (A) a 90% chance of $1 to (B) a 10% chance of $1000. Well, then I expose you to a series of 100 bets like this, and [apply probability theory] suddenly we are comparing (A’) a >90% chance of at least $85 with (B’) a >90% chance of at least $6000. Presumably you don’t feel so happy about choosing (A) now, eh? Maybe you want to go back on this risk aversion thing, and start maximising expected utility with every decision, like a proper Utilitarian?
However, this argument critically relies on repeating the bets. If there is only ever one bet, or a small number of them, the Central Limit Theorem does not apply!
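A quick simulation of why the repetition matters (a sketch, using the hypothetical $1 / $1000 bet sizes from the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# Option A: 90% chance of $1. Option B: 10% chance of $1000.
# One-shot, B pays nothing most of the time -- risk aversion is tempting:
one_shot_b = rng.random(trials) < 0.10
print(f"one-shot B pays $0 with prob ~{1 - one_shot_b.mean():.2f}")

# Repeated 100 times, the totals concentrate around their means:
a_totals = rng.binomial(100, 0.90, trials) * 1       # around $90
b_totals = rng.binomial(100, 0.10, trials) * 1000    # around $10,000
print(f"P(B total beats A total) ~ {(b_totals > a_totals).mean():.4f}")
```

With 100 repetitions, B beats A in essentially every run; with one bet, it usually pays nothing. That asymmetry is all the pro-EV argument has.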
In particular:
In your personal life, there is nothing incoherent about being risk averse with respect to your overall happiness, taken over your whole life.
In terms of near-term social impact, there is nothing incoherent about being risk averse with respect to the planetary scale.
In longtermism, there is nothing incoherent about being risk averse with respect to the scale of the future trajectories of the universe.
[Also: uuuhm, maaaybe don’t rely on things that you don’t understand, particularly if they start telling you to jump off a cliff?]
Disclaimer: There is a hypothetical version of this post that includes references, is polished, isn’t a rant, has a better example of risk aversion, makes all the disclaimers such as “yeah yeah, I know most people’s opinions are more nuanced than what I make them”, and that I ran through Grammarly a few times. Sadly, that version is less fun, more work, and doesn’t exist. So apologies to all offended readers.
Edit: changed the title to better reflect the main claim.
- ^
Insert countless links here. Including by philosophers much smarter than me, who really should know better.
I like your disclaimer, because I wanted to make a similar post, but since I’m a perfectionist I haven’t even started yet 😅 (Edit: I’m not sure about the downvotes, I’m saying I think OP is doing better than me)
Agreed and strongly upvoted.
It may not be incoherent to be risk averse, but there are instances in which it is not expected utility maximizing. If you have a 51% chance of world doubling, the expected utility is greater if you take the bet. That’s true, no matter how many times it was previously offered. I don’t quite understand why the CLT is relevant.
Yes, the expected utility is larger. The claim is that there is nothing incoherent about not maximising expected utility in this case.
To try rephrasing: Principle 1: if you have to choose between an X% chance of getting some outcome A and a >=X% chance of a strictly better outcome B, you should take B. Principle 2: if you will be facing a long series of comparably significant choices, you should decide each of them based on expected utility maximisation. Principle 3: you should do expected utility maximisation for every single choice, even if that is the last/most important choice you will ever make.
The claim is that: P1 is solid. P2 follows from P1 (via Central Limit Theorem, or whatever math). But P3 does not follow from P1/P2, and there will be cases where it might be justified to not obey P3. (Like the case with 51% chance of doubling the world’s goodness, 49% chance of destroying it.)
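To make the P2-vs-P3 gap vivid, here is a minimal calculation (assuming each bet multiplies the world’s value by 2 with probability 0.51 and by 0 otherwise, as in the example above):

```python
p_win = 0.51  # probability the world's value doubles; otherwise it goes to zero

def ev(n):
    # Expected value after n consecutive bets: 1.02x per bet, grows without bound.
    return (2 * p_win) ** n

def survival(n):
    # Probability the world still exists after n bets: shrinks to nothing.
    return p_win ** n

for n in (1, 10, 100):
    print(n, round(ev(n), 3), survival(n))
```

Taking the bet is EV-maximising at every single step, yet the policy of always taking it almost surely ends with nothing. That is exactly why P2’s repeated-bet justification cannot be pushed through to P3.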
Note that I am not claiming it’s wrong to do expected utility maximisation in all scenarios. Just saying that both doing it and not doing it are OK. And therefore it is (very?) non-strategic to associate your philosophical movement with it. (Given that most people’s intuitions seem to be against it.)
Does this explanation make sense? Maybe I should change the title to something with expected utility?
I don’t think EV reasoning relies on CLT and many repeated incidences. Probabilities (in the important decision-guiding sense) are features of the map, not the territory.
Yes, sure, probabilities are only in the map. But I don’t think that matters for this. Or I just don’t see what argument you are making here. (CLT is in the map, expectations are taken in the map, and decisions are made in the map (then somehow translated into the territory via actions). I don’t see how that says anything about what EV reasoning relies on.)
I’ll respond with two orthogonal comments: one on a real-life use of extremism, then one on moderation.
Also, money-pumping does happen; it’s why payday loans and stock picking exist (the EMH can’t be literally true, but you need very expensive strategies to beat it).
More generally, organizations like the Long Now really rely on there being no distributional shift, which in my view is probably not going to happen; and if it does happen, they’re useless (like so many other attempts to shift the long-term future).
Agree with Acylhalide’s point—you only need to be non-Dutchbookable by bets that you could actually be exposed to.
To address a potential misunderstanding: I agree with both of Sharmake’s examples. But they don’t imply you always have to maximise expected utility. Just when the assumptions apply.
More generally: expected utility maximisation is an instrumental principle. But it is justified by some assumptions, which don’t always hold.
I think the assumptions are usually true, though if they involve one-shot situations things change drastically.
Aren’t x-risk interventions and causes basically one-shot?
Uhm, yes, that seems right. That’s why this matters.
I do not agree: in most cases bets are repeated, so this doesn’t apply. (I also think by default the long-term future has extreme outcomes relative to today, but that’s a different case.)
Do not be moderate in your actions; (usually) go extreme.