Can Longtermists “profit” from short-term bias?
We often think of human short-term bias (and the associated hyperbolic discounting) and the uncertainty of the future as being among long-termism’s main obstacles; i.e., people won’t think about policies concerning the future because they can’t appreciate or compute their value. However, those features may actually provide some advantages, too – by evoking something analogous to the effect of the veil of ignorance:
1. They allow long-termism to provide a focal point where people with different allegiances may converge; i.e., being left- or right-wing inclined (probably) does not affect the importance someone assigns to existential risk – though it may influence the trade-off with other values (think about how risk mitigation may impact liberty and equality).
2. And (perhaps related to the previous point) it may allow for disinterested reasoning; i.e., if someone hyperbolically discounts their own stake in what will happen in 50 or 100 years, then they would not strongly oppose policies to be implemented in 50 or 100 years – as long as they don’t bear significant costs today.
I think (1) is quite likely acknowledged among EA thinkers, though I don’t recall it being explicitly stated; some may even reply “isn’t it obvious?”, but I don’t believe outsiders would immediately recognize it.
On the other hand, I’m confident (2) is either completely wrong or not recognized by most people. If it’s true, we could use it to extract from people, in the present, conditional commitments to be enforced in the (relatively) long-term future; e.g., if present investors discount future returns hyperbolically, they wouldn’t oppose something like a Windfall Clause. Maybe Roy’s nuke insurance could benefit from this bias, too.
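To make that intuition concrete, here’s a minimal sketch comparing how a hyperbolic and an exponential discounter value a payoff (or cost) of 100 at various delays. The one-parameter form V = A / (1 + kD) is the standard hyperbolic model, but the rates k and r below are purely illustrative assumptions, not estimates:

```python
# Toy comparison of hyperbolic vs. exponential discounting.
# The discount rates k and r are illustrative assumptions only.

def hyperbolic_pv(amount: float, delay_years: float, k: float = 0.5) -> float:
    """Present value under the one-parameter hyperbolic model V = A / (1 + k*D)."""
    return amount / (1 + k * delay_years)

def exponential_pv(amount: float, delay_years: float, r: float = 0.05) -> float:
    """Present value under standard exponential discounting V = A / (1 + r)**D."""
    return amount / (1 + r) ** delay_years

for d in (0, 1, 10, 50, 100):
    print(f"{d:>3} yrs: hyperbolic = {hyperbolic_pv(100, d):6.2f}, "
          f"exponential = {exponential_pv(100, d):6.2f}")
```

With these (made-up) parameters, a cost of 100 due in 50 years is worth less than 4 to the hyperbolic discounter today, and the curve is so flat by then that 50 and 100 years barely differ (about 3.85 vs. 1.96); that near-indifference to everything beyond a few decades is exactly the window a conditional commitment like the Windfall Clause could exploit.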
I wonder if this could be used for institutional design; for instance, creating or reforming organizations is often burdensome, because different interest groups compete to keep or expand their present influence and privileges – e.g., legislators will favor electoral reforms allowing them to be re-elected. Thus, if we could design arrangements to be enforced decades (how long?) after their adoption, without interfering with the current status quo, we would eliminate a good deal of the opposition to them; the problem then reduces to deciding what kind of arrangements would be useful to design this way, taking into account uncertainty, cluelessness, value shifts…
Are there any examples of existing or proposed institutions that try to profit from this short-term bias in a similar way? Is there any research along these lines that I’m failing to follow? Is it worth a longer post?
(One possibility is that we can’t really do that: this bias is something to be fought, not something we can collectively profit from; so, assuming the hinge of history hypothesis is false, the best we can do is to “transfer resources” from the present to the future, as sovereign wealth funds and patient philanthropy advocates already do.)