[Link] The option value of civilization

Linkpost for this letter by “CK”, shared on Marginal Revolution:

I think discounting is the wrong financial metaphor to use when discussing the moral worth of the present vs. the future. Instead, we should look to option pricing theory...

The key idea is that the total moral worth of the universe has some positively skewed distribution: there are more ways for things to be good than there are for them to be bad. Let’s take this as a given for now… [like] the payout profile of a call option...

there’s a fundamental difference between the value of the option and the value of the underlying. Translated to moral terms, we should distinguish between the value of the present and the ultimate moral worth of the universe...

Let’s start with the question of the value of the present vs. the value of the future. In my view, that language is confused. The value of the future is unknowable and can’t be affected directly. We should stop talking as if we can. We can only affect things like the value of the present and the volatility and overall trajectory of the historical process…

In moral terms, delta is interpreted as the derivative of the moral worth of the universe with respect to the value of the present. “How much should we care about the present?” can be restated as “What is the delta [of] the option?”
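The claim that delta is a derivative can be made concrete. Here is a minimal sketch (my illustration, not from the letter, with arbitrary parameter values): the standard Black-Scholes call delta, N(d1), checked against a finite-difference derivative of the call price with respect to the underlying.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price(S, K, T, r, sigma):
    # Black-Scholes price of a European call.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_call_delta(S, K, T, r, sigma):
    # Delta = dC/dS = N(d1) for a call.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# Arbitrary illustrative parameters.
S, K, T, r, sigma = 100.0, 90.0, 1.0, 0.02, 0.25

closed_form = bs_call_delta(S, K, T, r, sigma)
h = 0.01  # small bump to the underlying
finite_diff = (bs_call_price(S + h, K, T, r, sigma)
               - bs_call_price(S - h, K, T, r, sigma)) / (2 * h)
# closed_form and finite_diff agree: delta really is the sensitivity of the
# option's value (the "moral worth of the universe") to the underlying
# (the "value of the present").
```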

...If you think the potential value of the future is vastly greater than the value of the present (i.e. if you think our option is only slightly in-the-money), you should care less about the value of the present. But if the option is deep in-the-money — if civilization is secure and of great value — we should care more about increasing its value.
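The moneyness claim is easy to sanity-check numerically. A quick sketch (my own, with arbitrary parameters): the Black-Scholes delta of a slightly in-the-money call versus a deep in-the-money one on the same underlying.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(S, K, T, r, sigma):
    # Black-Scholes delta of a European call: N(d1).
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# Same underlying and volatility; only the strike (moneyness) differs.
slightly_itm = bs_call_delta(100.0, 95.0, 1.0, 0.0, 0.2)  # barely in-the-money
deep_itm = bs_call_delta(100.0, 40.0, 1.0, 0.0, 0.2)      # deep in-the-money
# deep_itm is close to 1: a deep in-the-money option behaves almost like the
# underlying itself, so changes in the present translate nearly one-for-one
# into changes in total value. The slightly in-the-money delta is well below 1.
```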

...as volatility increases, delta decreases. In moral terms: the greater the range of historical outcomes, the less we should care about the precise moment we’re in now. If we think history is highly dynamic, that the space of potential outcomes is very large, and that the far future can be vastly more valuable than the present, we should care less about the specific value of the present. Conversely, if we think we’re close to the end of history, we should focus on incremental tweaks to improve the value of the present.
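The volatility claim also checks out for an in-the-money call over a realistic range of volatilities (a sketch with arbitrary parameters; at extreme volatilities a call's delta eventually turns back up toward 1, so the letter's statement is a local property, not a global one):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(S, K, T, r, sigma):
    # Black-Scholes delta of a European call: N(d1).
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# An in-the-money call (S = 100, K = 80); only volatility varies.
deltas = [bs_call_delta(100.0, 80.0, 1.0, 0.0, sigma)
          for sigma in (0.1, 0.3, 0.6)]
# Delta falls as volatility rises: the wider the distribution of outcomes,
# the less the option's value tracks the current level of the underlying.
```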

There’s even a note on balancing S-risk within the options framework:

...I think it’s vastly more likely for civilization and value to simply be wiped out, than it is for a monstrously evil future to occur. But if you disagree, you can account for it in the option framework. The more likely an evil future, the more symmetric our payout profile. You can think of humanity as owning some combination of a long call and a short put. If our portfolio contains equal positions in each, our total delta is 1 — implying that the value of our options position moves one-for-one with the value of the underlying. Translated into moral terms: the more symmetric we think future outcomes are, the more we should care about the present.
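The delta arithmetic in that last step can be verified exactly: for the same strike and expiry, a call's delta is N(d1) and a put's is N(d1) − 1, so a long call plus a short put has delta N(d1) − (N(d1) − 1) = 1, whatever the parameters. A minimal sketch (parameter values arbitrary):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def d1(S, K, T, r, sigma):
    return (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))

# Arbitrary parameters; both legs share the same strike and expiry.
S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.3

call_delta = norm_cdf(d1(S, K, T, r, sigma))       # long call: N(d1)
put_delta = norm_cdf(d1(S, K, T, r, sigma)) - 1.0  # long put: N(d1) - 1
# Short put contributes -put_delta, so the combined position has delta 1:
total_delta = call_delta - put_delta  # = N(d1) - (N(d1) - 1) = 1 exactly
```

This is just put-call parity in delta form: a long call plus a short put replicates a forward on the underlying, which is why the symmetric-payout case collapses onto "care about the present one-for-one."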
...I’m sure someone in the Effective Altruism community has kicked these ideas around; I’m just not aware of it. If you know of any related work, I’d love to be pointed in the right direction.

I know Toby Ord has been playing with an analogous hazard model — but we’ll have to wait for the book(?).