Hi, everyone, I’m Muireall. I recently put down some thoughts on weighing the longterm future (https://muireall.space/repugnant/). I suspect something like this has been brought up before, but I haven’t been keeping up with writing on the topic for years. It occurred to me that this forum might be able to help with references or relevant keywords that come to mind. I’d appreciate any thoughts you have.
The idea, broadly, is that if you accept the repugnant conclusion with a “high” threshold (that is, some people consensually alive today don’t meet the “barely worth living” line), then I think your expected utility for the longterm future has to take a big hit from negative scenarios. From that perspective, not only is it likely that future civilization will mistake negative for positive welfare (as, apparently, we do), but it will also keep welfare on hold (since apparently near-threshold lives can be productive) while it, too, invests in favor of a still more distant intergalactic future, until existential catastrophe comes for it.
In other words, I worry that (1) expected-total-utility motivations for longtermism underrate very bad outcomes, and (2) these motivations can put you in the position of making Pascalian bets, over and over, for long enough to all but guarantee gambler’s ruin before your astronomical potential value is ever realized.
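To make the gambler’s-ruin worry concrete, here’s a toy numerical sketch (in Python, with made-up numbers, not anything from the linked post): if realizing the astronomical payoff requires surviving many successive existential bets, the cumulative survival probability decays geometrically, yet the expected value can stay enormous, which is the Pascalian structure I have in mind.

```python
# Toy illustration of the gambler's-ruin worry (hypothetical numbers only).
# Each period, civilization survives an existential "bet" with probability
# (1 - p_ruin); the astronomical payoff V is realized only if it survives
# every one of the n_periods bets.

p_ruin = 0.001      # hypothetical per-period probability of existential catastrophe
n_periods = 10_000  # hypothetical number of periods before the payoff arrives
V = 1e30            # hypothetical astronomical value if the payoff is reached

p_survive_all = (1 - p_ruin) ** n_periods
expected_value = p_survive_all * V

print(f"P(survive all periods) ~ {p_survive_all:.2e}")  # ~4.5e-05
print(f"Expected value         ~ {expected_value:.2e}")  # ~4.5e+25

# Ruin is all but guaranteed, yet the expected value remains enormous, so on
# paper the bet keeps looking worth taking even though the payoff is almost
# never actually realized.
```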