Consider time discounting in your cost-benefit calculations for avoiding tail risks
[Epistemic status: a bit unendorsed, written quickly]
For many tail-risk questions (covid, nuclear war), I’ve seen EAs[1] do a bunch of modeling of how likely the tail risk is and what its impact would be, maybe spending tens of hours on it. And then, when it comes time to weigh that tail risk against short-term tradeoffs, they do some incredibly naive multiplication, like life expectancy × risk of death.
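Concretely, that naive calculation looks something like this (the numbers are invented purely for illustration):

$$\text{expected loss} \approx \underbrace{0.1\%}_{\text{risk of death}} \times \underbrace{40\ \text{years}}_{\text{life expectancy}} = 0.04\ \text{life-years}$$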
Maybe that’s defensible if you view yourself as a pure hedonist with zero discounting, but that’s a hell of a mix. I, at least, care significantly about the impact of my life, and the impact part of my life has heavy instrumental discounting. If I’m working on growing the EA community, which is growing exponentially at ~20% per year, then I should discount my future productivity at 20% per year: work I put in now compounds along with that growth, so the same work done a year from now is worth about 20% less.
[This section especially might be wrong] So altruistically, I should be weighing the future at a factor of about $\int_0^{40} e^{-0.2t}\,dt$ times my productivity for a year, which turns out to be a factor of about 5.
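The work, for the curious (the 40-year horizon is my rough guess at remaining productive years; the 0.2 is the 20% annual discount rate from above):

$$\int_0^{40} e^{-0.2t}\,dt = \frac{1 - e^{-0.2 \cdot 40}}{0.2} = 5\left(1 - e^{-8}\right) \approx 5$$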
Maybe you disagree with this! Good! But now this important part of the cost-benefit calculation is getting the attention it deserves, rather than being a complete afterthought.
Another interesting way this could diverge from the naive case is not by including discounting, but by considering how much more or less impactful you would be as one of the survivors of a nuclear war. I lean toward thinking I would be less impactful, but maybe others would be more. This consideration doesn’t apply to long covid, but in my view it dominates the nuclear war calculation.
[1] Myself included.