Hi, everyone, I’m Muireall. I recently put down some thoughts on weighing the longterm future (https://muireall.space/repugnant/). I suspect something like this has been brought up before, but I haven’t been keeping up with writing on the topic for years. It occurred to me that this forum might be able to help with references or relevant keywords that come to mind. I’d appreciate any thoughts you have.
The idea is, broadly, that if you accept the repugnant conclusion with a “high” threshold (some people consensually alive today don’t meet the “barely worth living” line), then I think your expected utility for the longterm future has to take a big hit from negative scenarios. From that perspective, it’s not only likely that future civilization will mistake negative for positive welfare (as, apparently, we do), but also that, by the same logic, welfare should stay on hold for them (since apparently near-threshold lives can be productive) as they too invest in favor of the distant intergalactic future, until existential catastrophe comes for them.
In other words, I worry (1) expected-total-utility motivations for longtermism underrate very bad outcomes, and (2) these motivations can put you in the position of continually making Pascalian bets long enough to all but guarantee gambler’s ruin before realizing your astronomical potential value.
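To sketch the arithmetic behind (2) (a toy model of my own, not the one in my note): suppose each round of “investing in the future” multiplies potential value by a factor $g$ but carries a small probability $\varepsilon$ of existential catastrophe that zeroes everything out. Then after $n$ rounds,

$$\mathbb{E}[\text{value}] = W_0\big[(1-\varepsilon)\,g\big]^n \to \infty \quad \text{whenever } (1-\varepsilon)g > 1, \qquad \Pr[\text{any value is realized}] = (1-\varepsilon)^n \to 0.$$

Expected value tells you to keep taking the bet every round; the probability of ever cashing in goes to zero.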
I’ll answer my own question a bit:
Scattered critiques of longtermism exist, but are generally informal, tentative, and limited in scope. This recent comment and its replies were the best directory I could find.
A longtermist critique of “The expected value of extinction risk reduction is positive”, in particular, seems to be the best expression of my worry (1). My points about near-threshold lives and procrastination are another plausible story by which extinction risk reduction could be negative in expectation.
There’s writing about Pascalian reasoning (a couple of pieces that came up repeatedly were “A Paradox for Tiny Probabilities and Enormous Values” and “In defence of fanaticism”).
I vaguely recall a named paradox, maybe involving “procrastination” or “patience”, about how an immortal investor never cashes in—and possibly that this was a standard answer to Pascal’s wager/mugging together with some larger (but still tiny) probability of, say, getting hit by a meteor while you’re making the bet. Maybe I just imagined it.
I added a more mathematical note at the end of my post showing what I mean by (2). I think in general it’s more coherent to treat trajectory problems with dynamic programming methods rather than try to integrate expected value over time.
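For concreteness, here’s a minimal sketch of the kind of dynamic programming treatment I have in mind: a toy cash-in-versus-reinvest stopping problem solved by backward induction. The growth factor, catastrophe probability, and horizon are made up for illustration; this is not the model from my note.

```python
# A minimal sketch (toy numbers, not the model from the linked note):
# each period, a surviving civilization either realizes its current
# potential W ("cash in") or reinvests, multiplying W by GROWTH while
# risking extinction (probability EPS, payoff 0). A finite horizon T
# closes the backward-induction recursion.

GROWTH = 10.0   # multiplier on potential value per period of reinvestment
EPS = 0.1       # per-period probability of existential catastrophe
T = 50          # number of periods in the horizon


def solve(growth=GROWTH, eps=EPS, horizon=T):
    """Backward induction on the normalized value V_t = E[payoff] / W_t.

    Returns the per-period decisions and the normalized expected value at t = 0.
    """
    v = 1.0            # at the horizon you must cash in: payoff = W
    policy = []
    for _ in range(horizon):               # step backward from the horizon
        reinvest = (1.0 - eps) * growth * v  # survive, grow, continue
        cash_in = 1.0                        # realize current potential now
        if reinvest > cash_in:
            policy.append("reinvest")
            v = reinvest
        else:
            policy.append("cash in")
            v = cash_in
    policy.reverse()                       # earliest period first
    return policy, v


policy, v0 = solve()
# Number of reinvestment periods before the first voluntary cash-in
# (the whole horizon if the policy never cashes in until forced to).
waits = policy.index("cash in") if "cash in" in policy else T
# Realizing any value requires surviving every one of those periods.
p_realize = (1.0 - EPS) ** waits

print(f"expected value per unit of initial potential: {v0:.3g}")
print(f"periods spent reinvesting before cashing in:  {waits}")
print(f"probability any value is ever realized:       {p_realize:.3g}")
```

With these particular numbers the expected-value-optimal policy reinvests every period, so the expected payoff is astronomical while the probability of ever realizing any value is about 0.005. That’s the kind of trade-off I think the trajectory framing makes visible and the “integrate expected value over time” framing hides.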