I agree with a lot of the other folks here that risk aversion should not be seen as a selfish drive (even though, as Gleb mentioned, it can serve that drive in some cases), but rather as an important part of rational thinking. As for your actual question about ‘discounting future life’, I’ve been wondering about this a bit too. I think it’s fair to say that there are some risks involved with pursuing X-risks: there’s a decent chance you’ll be wrong, you may divert resources from other causes, your donation now may be insignificant compared to future donations made when the risk is better known and better understood, and you’ll never really know whether you’re making any progress. Many of these risks are accurately represented in EA’s cost/benefit models about X-risks (I’m sure yours involved some version of them, even if just the uncertainty one).
My recent worry is that when a given X-risk becomes associated with the EA community, these risks become magnified, and that magnification needs to be considered in our analyses. I think this can happen for three reasons:
First, the EA community could create an echo chamber for incorrect X-risks, which increases bias in support of those X-risks. In that case, rational people who would otherwise have dismissed the risk as conspiratorial become more likely to agree with it. We’d like to think that broad support for various X-risks within the EA community exists because EAs have more accurate information about those risks, but that’s not necessarily the case. Being in the EA community changes who you see as ‘experts’ on a topic – there isn’t a vocal majority of experts working on AI globally who see the threat as legitimate, which to a normal rational person may make the risk seem a little overblown. However, the vast majority of experts working on AI who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.
Second, if an ‘incorrect’ X-risk is embraced by the community, there’s a lot of resource diversion at stake – EA has the power to move a lot of resources in a positive direction, and if certain X-risks are way off base, then their popularity in EA carries an outsized opportunity cost.
Lastly, many X-risks turn a lot of reasonable people away from EA, even when those risks are correct. If we believe that EA is a great boon to humanity, then this reputational risk has very real implications for the analysis.
Those are my rough initial thoughts, which I’ve elaborated on a bit here. It’s a tricky question though, so I’d love to hear people’s critiques of this line of thinking—is this magnified risk something we should take into account? How would we account for it in models?