Most people really don’t want to die, or to be disempowered in their lifetimes. So, for existential risk to be high, there has to be some truly major failure of rationality going on.
… What is surprising about the world having a major failure of rationality? That’s the default state of affairs for anything requiring a modicum of foresight. A fairly core premise of early EA was that there is a truly major failure of rationality going on in the project of trying to improve the world.
Are you surprised that ordinary people spend more money and time on, say, their local sports team, than on anti-aging research? For most of human history, aging had a ~100% chance of killing someone (unless something else killed them first).
I think that most of classic EA vs the rest of the world is a difference in preferences / values, rather than a difference in beliefs. Ditto for someone funding their local sports teams rather than anti-aging research. We’re saying that people are failing in the project of rationally trying to improve the world by as much as possible—but few people really care much or at all about succeeding at that project. (If they cared more, GiveWell would be moving a lot more money than it is.)
In contrast, most people really really don’t want to die in the next ten years, are willing to spend huge amounts of money not to do so, will almost never take actions that they know have a 5% or more chance of killing them, and so on. So, for x-risk to be high, many people (e.g. lab employees, politicians, advisors) have to catastrophically fail at pursuing their own self-interest.
“So, for x-risk to be high, many people (e.g. lab employees, politicians, advisors) have to catastrophically fail at pursuing their own self-interest.”
I don’t think this obviously follows.
Firstly, the effect of not doing unsafe AI things yourself is seldom that no one else does them; it's more of a tragedy-of-the-commons situation, right? Especially if there is one leading lab that is irrationally optimistic about safety, which doesn't seem to require that low a view of human rationality in general.
Secondly, someone like Musk might have a value system where he cares a lot about personally capturing the upside of reaching personally-aligned superintelligence first. He might then do dangerous things for the same reason that a risk-neutral person will take a 90% chance of instant death plus a 10% chance of living to be ten million years old over the status quo.
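To make the risk-neutrality point concrete, here is the expected-lifespan arithmetic (the ~50 remaining years for the status quo is my own illustrative assumption, not a figure from the comment above):

$$0.9 \times 0 + 0.1 \times 10{,}000{,}000 = 1{,}000{,}000 \text{ expected years} \;\gg\; \sim 50 \text{ expected years (status quo)}$$

A risk-neutral lifespan-maximizer takes the gamble even though it kills them 90% of the time, so this kind of value system can produce dangerous behavior without any failure of rationality.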
I think that most of classic EA vs the rest of the world is a difference in preferences / values, rather than a difference in beliefs.
I somewhat disagree but I agree this is plausible. (That was more of a side point, maybe I shouldn’t have included it.)
most people really really don’t want to die in the next ten years
Is your claim that they really really don’t want to die in the next ten years, but they are fine dying in the next hundred years? (Else I don’t see how you’re dismissing the anti-aging vs sports team example.)
So, for x-risk to be high, many people (e.g. lab employees, politicians, advisors) have to catastrophically fail at pursuing their own self-interest.
Sure, I mostly agree with this (though I’d note that it can be a failure of group rationality, without being a failure of individual rationality for most individuals). I think people frequently do catastrophically fail to pursue their own self-interest when that requires foresight.
Is your claim that they really really don’t want to die in the next ten years, but they are fine dying in the next hundred years? (Else I don’t see how you’re dismissing the anti-aging vs sports team example.)
Dying when you’re young seems much worse than dying when you’re old for various reasons:
Quality of life is worse when you’re old
When you’re old you will have done much more of what you wanted in life (e.g. have kids and grandkids)
It’s very normal and expected to die when you’re old
Also, I’d imagine people don’t want to fund anti-aging research for various (valid) reasons:
Skepticism that it is very cost-effective
The public-goods problem leads to under-provision (everyone benefits from the research even if you don’t fund it yourself)
From a governmental perspective, living longer is actually a massive societal issue: it introduces serious fiscal challenges, since pensions and the like need funding for longer. From an individual perspective, living longer can just mean having to work longer to support yourself. So does anyone actually see anti-aging as that great?
People discount the future (a quick sketch of how much this alone can matter is below)
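As a minimal sketch of the discounting point (the 5% annual rate is an illustrative assumption): under exponential discounting, a year of extra life delivered 40 years from now is worth only about

$$e^{-0.05 \times 40} = e^{-2} \approx 0.14$$

of a year today, so even large anti-aging benefits can rationally lose out to near-term spending.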
Having said all this, I actually agree with you that x-risk could be fairly high due to a failure of rationality. Primarily, this is because we’ve never gone extinct, so people naturally think extinction is really unlikely, even though x-risk is rising as we get more technologically powerful.
BUT, I agree with Will’s core point that working towards the best possible future is almost certainly more neglected than reducing x-risk, partly because it’s just so wacky. People think about good futures where we are very wealthy and have lots of time to do fun stuff, but do they think about futures where we create loads of digital minds that live maximally-flourishing lives? I doubt it.