Agreed completely, and I should also add that this is the exact reason I’m against all AGI development in the near term (at least 50 years, likely longer). Right now, our society is still deeply confused and irrational, as you put it, and our currently most powerful social systems exacerbate this. I think our primary purpose, right now, should be social and moral development, not new technology. Once we figure out global peace and governance, economics not based on benefit to a small elite at the expense of literally everyone else, and food systems without animal death and suffering, then we can start tackling the problem of AGI / ASI. We need a global pause on AI development that lasts decades, excluding strictly narrow AI with no AGI potential (think Stockfish or Alexa). And this pause must be strictly enforced by governments worldwide, otherwise it is likely to increase x-risk by driving AI research underground, affiliated with criminal groups and rogue actors, etc.
Unfortunately, that means that if you are reading this (and are over the age of 10 or so), you will live, work, age, and die much as our human ancestors have for millennia. If you're currently in your 30s, as I am, your grandchildren may see an ASI utopia, but you will work, struggle, and decline just like your own parents and grandparents did. This is a hard thing to face, but if you are truly rational (even if you aren't an Effective Altruist) you must accept it. The risk of creating a misaligned superintelligence is simply too great to press forward now.
With all this said, I think the chance of an effective AI ban actually coming to fruition is effectively zero, and an ineffective ban would likely increase x-risk by driving research and development underground. So we're just going to have to live with a high level of x-risk (hopefully we will keep LIVING with it) for the foreseeable future.