Scott—thanks for the thoughtful reply; much appreciated.
I think a key strategic difference here is that I’m willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago.
Moral stigmatization is a powerful but blunt instrument. It doesn’t do nuance well. It isn’t ‘epistemically responsible’ in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most people aren’t comfortable stigmatizing people who ‘seem like us’—e.g. AI devs who share with most EAs traits such as high intelligence, high openness, technophilia, liberal values, and ‘good intentions’, broadly construed.
But, I don’t see any practical way of slowing AI capabilities development without increasing the moral stigmatization of the AI industry. And Sam Altman has rendered himself highly, highly stigmatizable. So, IMHO, we might as well capitalize on that, to help save humanity from his hubris, and the hubris of other AI leaders.
(And, as you point out, formal regulation and gov’t policy also come with their own weaknesses, vested interests, and bad actors. So, although EAs tend to act as if formal gov’t regulation is somehow morally superior to the stigmatization strategy, it’s not at all clear to me that it really is.)
I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is currently giving too strong an impression of being in full-blown war mode against Sam Altman, and I can see this backfiring spectacularly, with him (and the industry) burning all bridges with anything EA- or Rationalist-adjacent in AI safety. It looks too much like Classical Greek Tragedy—actions taken to avoid a certain outcome actually making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry.
Manuel—thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.
But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won’t buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production until they find more humane ways to raise and slaughter chickens.
I don’t really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of AI safety issues among the general public and politicians quickly and effectively enough to morally stigmatize the AI industry.
Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.