It’s a game of chicken, and I don’t really care which side is hitting the accelerator if I’m stuck in one of the cars. China getting uncontrolled ASI first kills me the same way that the US getting it does.
Edit to add: I would be very interested in responses instead of disagree votes. I think this should be the overwhelming consideration for anyone who cares about the future more than, say, 10 years out. If people disagree, I would be interested in understanding why.
Since you requested responses: I agree with something like: ‘conditional upon AI killing us all and then going on to do things that have zero moral (dis)value, it then matters little who was most responsible for that having happened’. But this seems like an odd framing to me:
1. Even if focusing solely on AI alignment, different actors have varying levels of responsibility for worsening various risk factors, or for contributing to various safety/security/mitigation efforts, between now and the arrival of transformative AI / ASI.
2. The post asked about AGI. Reaching AGI is not the same as reaching ASI, which is not the same as extinction.
3. It seems very possible that humanity could survive but the world could end up severely net negative. See “The Future Might Not Be So Great”, “s-risks”, and the upcoming EA Forum debate week.
4. In particular, I believe AI alignment is not enough to ensure positive futures. See, for example, risks of stable totalitarianism, risks from malevolent actors, and risks from ideological fanaticism. We can think of this as ‘human misalignment’ or misuse of AI.
To respond to your points in order:
1. Sure, but I think of, say, a 5% probability of success and a 6% probability of success as similarly dire enough not to want to pick either.
2. What we call AGI today, human-level at everything as a minimum but running on a GPU, is what Bostrom called speed and/or collective superintelligence, if chip prices and speeds continue to improve.
3. and 4. Sure, alignment isn’t enough, but it’s necessary, and it seems we’re not on track to clear even that low bar.
You were getting disagree votes because it sounded like you were claiming certainty. I realize you weren’t trying to do that, but that’s how people were taking it, and I find that quite understandable. In its standard formulation, the chicken analogy implies certain death if neither player swerves. Qualifying your statement even a little would’ve gotten your point across better.
FWIW I agree with your statement as I interpret it. I do tend to think that a misalignment risk this high (I place it around 50%, largely based on model uncertainty on all sides) makes the question of which side is safer basically irrelevant.
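To make the arithmetic behind that claim concrete, here is a minimal sketch; every number in it is made up for illustration, except the ~50% figure taken from the comment above.

```python
# Toy illustration: if P(misaligned takeover | ASI is built) is around 50%,
# the shared downside dominates the expected outcome, and the difference
# between a "safer" and a "less safe" developer barely moves it.
# All values are invented for illustration; only the ~50% figure comes from above.

P_DOOM = 0.5              # assumed P(catastrophe | ASI built)
VALUE_DOOM = -100.0       # value of a misaligned takeover, the same for everyone
VALUE_GOOD_SAFER = 1.0    # value of a good outcome under the "safer" developer
VALUE_GOOD_RISKIER = 0.5  # value of a good outcome under the "less safe" developer

ev_safer = P_DOOM * VALUE_DOOM + (1 - P_DOOM) * VALUE_GOOD_SAFER
ev_riskier = P_DOOM * VALUE_DOOM + (1 - P_DOOM) * VALUE_GOOD_RISKIER

print(f"EV if the safer side builds ASI:   {ev_safer:.2f}")              # -49.50
print(f"EV if the riskier side builds ASI: {ev_riskier:.2f}")            # -49.75
print(f"Gap between the two sides:         {ev_safer - ev_riskier:.2f}") # 0.25
```

Both expected values are dominated by the shared downside term; the 0.25 gap between the two sides is negligible by comparison.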
Which highlights the problem with this type of miscommunication. You were probably making by far the most important point here. It didn’t play a prominent role because it wasn’t communicated in a way the audience would understand.
You’re stating it as a fact that “it is” a game of chicken, i.e. that it’s certain or very likely that developing ASI will cause a global catastrophe because of misaligned takeover. It’s an outcome I’m worried about, but it’s far from certain, as I see it. And if it’s not certain, then it is worth considering what people would do with aligned AI.
I’m confused why people think certainty is needed to characterize this as a game of chicken! It’s certainly not needed for the game-theoretic dynamics to apply.
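A minimal sketch of that point, with toy payoffs of my own choosing rather than anyone’s actual estimates: the chicken structure only requires the expected cost of mutual racing to be worse than the cost of backing down, not a certain crash.

```python
# Made-up chicken payoffs with an uncertain crash. The game keeps chicken's
# structure whenever mutual racing is worse in expectation than backing down,
# i.e. whenever p_crash * CRASH < BACK_DOWN; certainty is not required.

ADVANTAGE = 1.0    # payoff for racing while the other side backs down
BACK_DOWN = -1.0   # payoff for backing down while the other side races
CRASH = -10.0      # payoff to both sides if mutual racing ends in catastrophe

def expected_payoff(me: str, other: str, p_crash: float) -> float:
    """Expected payoff to `me`, where p_crash = P(catastrophe | both race)."""
    if me == "stop" and other == "stop":
        return 0.0
    if me == "race" and other == "stop":
        return ADVANTAGE
    if me == "stop" and other == "race":
        return BACK_DOWN
    # Both race: catastrophe with probability p_crash, otherwise a plain tie.
    return p_crash * CRASH + (1 - p_crash) * 0.0

for p in (0.2, 0.5, 1.0):
    both_race = expected_payoff("race", "race", p)
    print(f"p={p:.1f}: EV(both race) = {both_race:6.1f}, still chicken: {both_race < BACK_DOWN}")
```

With these toy numbers the structure holds for any crash probability above roughly 10%, far short of certainty.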
I can make a decision about whether to oppose something given that there is substantial uncertainty, and I have done so.
I agree with this comment, but I interpreted your original comment as implying a much greater degree of certainty of extinction assuming ASI is developed than you might have intended. My disagree vote was meant to disagree with the implication that it’s near certain. If you think it’s not near certain it’d cause extinction or equivalent, then it does seem worth considering who might end up controlling ASI!
If it’s “only” a coin flip whether it causes extinction if developed today, to be wildly optimistic, then I will again argue that talking about who should flip the coin seems bad: the correct answer in that case is no one, and we should be incredibly clear on that!
Agreed, a coin flip is unacceptable! Even a risk much lower than a coin flip is still unacceptable.
A) There is no concrete proof that ASI is actually on the near-term horizon.
B) There is no concrete proof that if “uncontrolled” ASI is made, it is certain to kill us.
C) There is no concrete proof that the US and China would be equally bad if they obtained ASI. We have limited information as to what each country will look like decades in the future.
“You can’t prove it” isn’t the type of argument I expect to see if we’re truthseeking. All of these are positions taken by various other experts, and are at least reasonable. No, I’m not certain, but I don’t need to be when others are risking my life in pursuit of short-term military advantage.
Dude, I agree there’s no proof it’s going to kill us. But—what do you think is even happening? What are you waiting to see, before you take this seriously? We are absolutely on track to find out what happens to a biological species when it builds AI smarter than itself.