I agree with a lot of the other folks here that risk aversion should not be seen as a selfish drive (even though, as Gleb mentioned, it can serve that drive in some cases), but rather as an important part of rational thinking. In terms of directly answering your question about 'discounting future life', though, I've been wondering about this a bit too. I think it's fair to say that there are some risks involved in focusing on X-risks: there's a decent chance you'll be wrong, you may divert resources from other causes, your donation now may be insignificant compared to future donations made when the risk is better known and better understood, and you'll never really know whether or not you're making any progress. Many of these risks are accurately represented in EA's cost/benefit models about X-risks (I'm sure yours involved some version of these, even if just the uncertainty one).
My recent worry is that when a given X-risk becomes associated with the EA community, these risks become magnified, which in turn needs to be considered in our analyses. I think this can happen for three reasons:
First, the EA community could create an echo chamber for incorrect X-risks, which amplifies bias in support of those X-risks. In this case, rational people who would otherwise have dismissed the risk as conspiratorial become more likely to agree with it. We'd like to think that the broad support for various X-risks within the EA community exists because EAs have more accurate information about those risks, but that's not necessarily the case. Being in the EA community changes who you see as 'experts' on a topic: there isn't a vocal majority of experts working on AI globally who see the threat as legitimate, which to an ordinary rational person may make the risk seem a little overblown. However, the vast majority of AI experts who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.
Second, if an 'incorrect' X-risk is embraced by the community, there's a lot of resource diversion at stake: EA has the power to move a lot of resources in a positive way, and if certain X-risks are way off base, then their popularity within EA carries an outsized opportunity cost.
Lastly, many X-risks turn a lot of reasonable people away from EA, even when those risks are correct. If we believe that EA is a great boon to humanity, then this reputational risk has very real implications for the analysis.
Those are my rough initial thoughts, which I’ve elaborated on a bit here. It’s a tricky question though, so I’d love to hear people’s critiques of this line of thinking—is this magnified risk something we should take into account? How would we account for it in models?
I just want to push back against your statement that “economists believe that risk aversion is irrational”. In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.
To explain this, I just want to quickly point out that, from an economic standpoint, there's no correct formal way of measuring risk aversion in utils, because utility is an ordinal, not a cardinal, measure. Risk aversion is instead applied to real measures, like crop yields, in order to better estimate people's revealed preferences; in essence, risk aversion is a way of taking utility into account when measuring non-utility values.
So, to put this in context, let's say you're a subsistence farmer with an expected yield of X from growing sorghum or a tuber, and you know you'll always get roughly that yield (since sorghum and many tubers are remarkably resilient). Now someone offers you an 'improved maize' package with an expected yield of 2X, but a 10% chance that your crops will fail completely. A rational person at the poverty line should always choose the sorghum/tuber. That 10% chance of a failed crop is much, much worse than expected yield alone can capture: you could starve, have to sell productive assets, and so on. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility cardinally, we would just do that and get the correct answer without invoking risk aversion; because we can't, we use risk aversion to account for situations like this.
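To make the arithmetic concrete, here's a minimal sketch in Python. The CRRA utility function, the risk-aversion coefficient of 3, and the specific yield numbers are my own illustrative assumptions, not figures from any real study:

```python
# Illustrative sketch: why a subsistence farmer can rationally prefer a
# lower-expected-yield crop. All numbers here are hypothetical.

def crra_utility(consumption, gamma=3.0):
    """Constant relative risk aversion (CRRA) utility; with gamma > 1,
    losses near subsistence hurt far more than equal-sized gains help."""
    return consumption ** (1.0 - gamma) / (1.0 - gamma)

safe_yield = 1.0                      # sorghum/tuber: yield X with certainty
risky_outcomes = [(0.9, 2.0),         # 'improved maize': 2X with 90% probability
                  (0.1, 0.1)]         # near-total crop failure with 10% probability

expected_yield_risky = sum(p * y for p, y in risky_outcomes)                  # 1.81
expected_utility_safe = crra_utility(safe_yield)                              # -0.50
expected_utility_risky = sum(p * crra_utility(y) for p, y in risky_outcomes)  # about -5.11

print(f"expected yield   safe: {safe_yield:.2f}   risky: {expected_yield_risky:.2f}")
print(f"expected utility safe: {expected_utility_safe:.2f}  risky: {expected_utility_risky:.2f}")
```

Even though the risky crop nearly doubles expected yield, its expected utility is far lower, because the concave utility function encodes the fact that a failed harvest at the poverty line is catastrophic. That concavity is exactly what 'risk aversion' is formalizing here.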
As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we’re able to ‘prove’ that diminishing marginal utility exists, even if we can’t measure it directly.
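A minimal sketch of that link, using the standard Jensen's inequality argument (my wording, and the notation is only illustrative): suppose someone weakly prefers receiving the expected value of any gamble for certain over the gamble itself. In expected-utility terms, that preference says

$$u\big(\mathbb{E}[Y]\big) \;\ge\; \mathbb{E}\big[u(Y)\big] \quad \text{for every lottery } Y,$$

and Jensen's inequality tells us this holds for all lotteries exactly when u is concave, i.e. when marginal utility is decreasing. So risk-averse choices revealed over observable quantities like yields or income let us infer diminishing marginal utility without ever putting utility on a cardinal scale.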