[Link] The option value of civilization

Linkpost for this letter by “CK”, shared on Marginal Revolution:

I think discounting is the wrong financial metaphor to use when discussing the moral worth of the present vs. the future. Instead, we should look to option pricing theory...

The key idea is that the total moral worth of the universe has some positively skewed distribution: there are more ways for things to be good than for them to be bad. Let’s take this as a given for now… [like] the payout profile of a call option...

there’s a fundamental difference between the value of the option and the value of the underlying. Translated to moral terms, we should distinguish between the value of the present and the ultimate moral worth of the universe...

Let’s start with the question of the value of the present vs. the value of the future. In my view, that language is confused. The value of the future is unknowable and can’t be affected directly. We should stop talking as if we can. We can only affect things like the value of the present and the volatility and overall trajectory of the historical process… In moral terms, delta is interpreted as the derivative of the moral worth of the universe with respect to the value of the present. “How much should we care about the present?” can be restated as “What is the delta [of] the option?”

...If you think the potential value of the future is vastly greater than the value of the present (i.e. if you think our option is only slightly in-the-money) you should care less about the value of the present. But if the option is deep in-the-money — if civilization is secure and of great value — we should care more about increasing its value.
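To see the moneyness claim in ordinary option terms: here is a minimal Black–Scholes sketch of a European call’s delta, N(d1). The specific spot, strike, and volatility numbers are illustrative assumptions of mine, not figures from the letter.

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    return norm_cdf(d1)

# Slightly in-the-money: most of the option's value is still potential,
# so it tracks the underlying only loosely (delta well below 1).
slightly_itm = call_delta(spot=1.05, strike=1.0, vol=0.2, t=1.0)

# Deep in-the-money: the option behaves almost like the underlying itself
# (delta near 1) -- the "civilization is secure" case in the letter's analogy.
deep_itm = call_delta(spot=3.0, strike=1.0, vol=0.2, t=1.0)

print(slightly_itm, deep_itm)
```

The slightly in-the-money delta comes out around 0.6, while the deep in-the-money delta is essentially 1: the deeper in the money, the more the present value of the underlying matters.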

...as volatility increases, delta decreases. In moral terms: the greater the range of historical outcomes, the less we should care about the precise moment we’re in now. If we think history is highly dynamic, that the space of potential outcomes is very large, and that the far future can be vastly more valuable than the present, we should care less about the specific value of the present. Similarly, if we think we’re close to the end of history, we should focus on incremental tweaks to improve the value of the present.

There’s even a note on balancing S-risks within the option framework:

...I think it’s vastly more likely for civilization and value to simply be wiped out, than it is for a monstrously evil future to occur. But if you disagree, you can account for it in the option framework. The more likely an evil future, the more symmetric our payout profile. You can think of humanity as owning some combination of a long call and a short put. If our portfolio contains equal positions in each, our total delta is 1 — implying that the value of our options position is identical to the value of the underlying. Translated into moral terms: the more symmetric we think future outcomes are, the more we should care about the present.
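The portfolio-delta claim is put-call parity in disguise: a long call plus a short put at the same strike replicates a forward on the underlying, so the combined delta is exactly 1. A small sketch, again with illustrative parameters of my own choosing:

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def d1(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    return (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))

spot, strike, vol, t = 1.2, 1.0, 0.6, 1.0

long_call_delta = norm_cdf(d1(spot, strike, vol, t))           # N(d1)
short_put_delta = 1.0 - norm_cdf(d1(spot, strike, vol, t))     # -(N(d1) - 1)
portfolio_delta = long_call_delta + short_put_delta

print(portfolio_delta)
```

The two position deltas sum to exactly 1 regardless of the parameters chosen, which is the letter’s point: a fully symmetric payout profile makes the position equivalent to holding the underlying outright.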

...I’m sure someone in the Effective Altruism community has kicked these ideas around; I’m just not aware of it. If you know of any related work, I’d love to be pointed in the right direction.

I know Toby Ord has been playing with an analogous hazard model, but we’ll have to wait for the book(?).