I’ll respond with two orthonormal comments on a real-life use of extremism, then moderation.
Re: COVID, the correct course of action (unless one was psychic) was to be extremely paranoid at the start (attempting total bubbling, sterilizing outside objects, etc.), because the expected value (EV) was very downside-skewed; then, as more information came in, to stop worrying about surfaces, be fine with spacious outdoor gatherings, get a good mask, and be comfortable doing some things inside, etc.
That is, a good EA would have been faster than the experts on taking costly preventative acts and faster than the experts on relaxing those where warranted.
Some actual EAs seemed to do this well, and others missed in one direction or the other (there was a lot of rapid group house self-sorting in March/April 2020 over this, and then a slower process afterward).
To be concrete about my model, sterilizing groceries was the right call in March 2020 but not by June 2020 (when we knew it very probably didn’t transmit through surfaces), and overall maximum-feasible alert was the right call in March 2020 but not by June 2020 (when we knew the IFR was low for healthy young people and that the hospitals were not going to be too overwhelmed).
“Be sure the act is effective” is not a good proxy for “take actions based on EV”. In March 2020, the officials were sure (based on a bad model) that COVID wasn’t airborne. We masked up all the same, not because we knew it would be effective but because the chance was large enough for the expected gain to outweigh the cost.
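As a toy version of that cost–benefit calculation (all numbers below are made up for illustration, not real COVID estimates):

```python
# Toy expected-value comparison for a costly precaution under uncertainty.
# Every number here is an illustrative assumption, not a real estimate.
p_airborne = 0.3             # assumed chance the "not airborne" consensus is wrong
harm_if_infected = 1000      # assumed cost of an infection (arbitrary units)
p_infection_unmasked = 0.05  # assumed baseline infection probability if airborne
risk_reduction = 0.5         # assumed fraction of airborne risk a mask removes
cost_of_masking = 5          # assumed inconvenience cost of wearing a mask

# Expected gain from masking: it only helps in the world where COVID is
# airborne, and there it cuts some fraction of the infection risk.
expected_gain = p_airborne * p_infection_unmasked * risk_reduction * harm_if_infected
# ≈ 7.5 > 5: masking is +EV here even though it probably does nothing,
# because the chance of it mattering times the harm avoided exceeds the cost.
print(expected_gain > cost_of_masking)
```

The point is that the decision never required being *sure* masks worked, only that the product of (chance it matters) × (harm avoided) beat the cost.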
Also, money-pumping does happen: it’s why payday loans and stock picking exist (the EMH can’t be literally true, but you need very expensive strategies to beat it).
More generally, organizations like the Long Now rely heavily on there being no distributional shift, which in my view is unlikely to hold; and if a shift happens, they’re useless (like so many other attempts to shift the long-term future).
Agree with Acylhalide’s point—you only need to be non-Dutchbookable by bets that you could actually be exposed to.
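A minimal sketch of what being Dutch-booked actually requires, with illustrative numbers (the probabilities and payouts are made up):

```python
# Sketch of a Dutch book: an agent whose probabilities for an event and its
# complement sum to more than 1 will buy a pair of bets that guarantees a loss.
def max_price_for_bet(prob, payout=1.0):
    """Highest price a naive EV-maximiser pays for a ticket paying `payout` if the event occurs."""
    return prob * payout

p_rain, p_no_rain = 0.6, 0.6  # incoherent beliefs: they sum to 1.2

price_rain = max_price_for_bet(p_rain)        # 0.6
price_no_rain = max_price_for_bet(p_no_rain)  # 0.6

# If a bookie sells both tickets, exactly one pays out 1.0, so the agent
# is guaranteed to lose 0.2 in total -- but only if someone actually
# offers both sides of the bet. Incoherence with no counterparty costs nothing.
guaranteed_loss = price_rain + price_no_rain - 1.0
print(round(guaranteed_loss, 2))
```

This is why the point above matters: the pump only fires against bets you are actually exposed to.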
To address a potential misunderstanding:
I agree with both of Sharmake’s examples. But they don’t imply you always have to maximise expected utility, only when the assumptions apply.
More generally: expected utility maximisation is an instrumental principle. But it is justified by some assumptions, which don’t always hold.
I think the assumptions are usually true, though in one-shot situations things change drastically.
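A quick simulation of why repetition is one of those assumptions (the gamble and payoffs are made up):

```python
import random

random.seed(0)

# Illustrative gamble: 10% chance of 100, else 0 (EV = 10), versus a sure 9.
def play_gamble():
    return 100 if random.random() < 0.10 else 0

# Repeated play: the average converges toward the EV of 10, so the +EV
# gamble reliably beats the sure 9 in the long run.
n = 100_000
average = sum(play_gamble() for _ in range(n)) / n

# One-shot play: 90% of the time you get 0 and the sure 9 was better.
# The standard argument for maximising EV leans on this kind of repetition
# (or on many independent decisions aggregating), which one-shot
# interventions don't get.
print(average)
```

With only one draw, the variance dominates, which is exactly where the instrumental case for EV maximisation gets shaky.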
Aren’t x-risk interventions and causes basically one-shot?
Uhm, yes, that seems right. That’s why this matters.