I think this is more over-learning and institutional scar tissue from FTX. The world isn’t divided into Bad Actors and Non-Bad-Actors such that the Bad Actors are toxic and will destroy everything they touch.
There’s increasing evidence that Sam Altman is a cut-throat businessman who engages in shady practices. This also describes, for example, Bill Gates and Elon Musk, both of whom also have other good qualities. I wouldn’t trust either of them to single-handedly determine the fate of the world, but they both seem like people who can be worked with in the normal paradigm of different interests making deals with each other while appreciating a risk of backstabbing.
I think “Sam Altman does shady business practices, therefore all AI companies are bad actors and alignment is impossible” is a wild leap. We’re still in the early (maybe early middle) stages of whatever is going to happen. I don’t think this is the time to pick winners and put all our eggs in a single strategy. Besides, what’s the alternative? Policy? Do you think politicians aren’t shady cut-throat bad actors? That the other activists we would have to work alongside aren’t? Every strategy involves shifting semi-coalitions with shady cut-throat bad actors of some sort or another; you just try to do a good job navigating them and keep your own integrity intact.
If your point is “don’t trust Sam Altman absolutely to pursue our interests above his own”, point taken. But there is a vast gulf between “don’t trust him absolutely” and “abandon all strategies that come into contact with him in any way”. I think the middle ground is to treat him approximately how most people here treat Elon Musk. He’s a brilliant but cut-throat businessman who engages in plenty of shady practices. He seems to genuinely have some kind of positive vision for the world, or to want, for PR reasons, to seem like he has one, or to have a mental makeup incapable of distinguishing those two things. He’s willing to throw the AI safety community the occasional bone when it doesn’t interfere with business too much. We don’t turn ourselves into the We Hate Elon Musk movement, or avoid ever working with tech companies because they contain people like Elon Musk. We distance ourselves from him enough that his PR problems aren’t our PR problems (already done in Sam’s case; thanks to the board, the average person probably thinks of us as weird anti-Sam-Altman fanatics), describe his positive and negative qualities honestly if asked, try to get him to take whatever good advice we have that doesn’t conflict with his business too much, and continue holding a diverse portfolio of strategies at any given time. And part of the shifting semi-coalitions is that if some great opportunity to get rid of him comes along, we compare him to the alternatives and maybe take it. But we’re so far away from having that alternative that pining after it is a distraction from the real world.
“But we’re so far away from having that alternative that pining after it is a distraction from the real world.”
For one thing, we could try to make OpenAI/SamA toxic to invest in or do business with, and hope that other AI labs either already have better governance / safety cultures, or are greatly incentivized to improve on those fronts. If we (EA as well as the public in general) give him a pass (treat him as a typical/acceptable businessman), what lesson does that convey to others?
Yeah, I also don’t think we are that far away. OpenAI seems like it’s just a few more scandals like the past week’s away from implosion. Or at least, Sam’s position as CEO seems to be on shaky ground again, and this time he won’t have unanimous support from the rank-and-file employees.
Scott—thanks for the thoughtful reply; much appreciated.
I think a key strategic difference here is that I’m willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago.
Moral stigmatization is a powerful but blunt instrument. It doesn’t do nuance well. It isn’t ‘epistemically responsible’ in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most people aren’t comfortable stigmatizing people who ‘seem like us’—e.g. AI devs who share with most EAs traits such as high intelligence, high openness, technophilia, liberal values, and ‘good intentions’, broadly construed.
But, I don’t see any practical way of slowing AI capabilities development without increasing the moral stigmatization of the AI industry. And Sam Altman has rendered himself highly, highly stigmatizable. So, IMHO, we might as well capitalize on that, to help save humanity from his hubris, and the hubris of other AI leaders.
(And, as you point out, formal regulation and gov’t policy also come with their own weaknesses, vested interests, and bad actors. So, although EAs tend to act as if formal gov’t regulation is somehow morally superior to the stigmatization strategy, it’s not at all clear to me that it really is.)
I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is right now giving too much the impression of being in full-blown war mode against Sam Altman, and I can see this backfiring in a spectacular way, with him (and the industry) burning all the bridges with anything EA- and Rationalist-adjacent in AI safety. It looks too much like classical Greek tragedy: actions taken to avoid a certain outcome end up making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry.
Manuel—thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.
But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won’t buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of the chicken meat in the US) to slow down their chicken production until they find more humane ways to raise and slaughter chickens.
I don’t really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of AI safety issues among the general public and politicians quickly and effectively enough to morally stigmatize the AI industry.
Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.
I think there’s ample public evidence that Sam Altman is substantially less trustworthy than the average tech CEO. Hopefully more private evidence will come out later that mostly exonerates him and puts him closer in character to “typical tech CEO”, but I don’t think that will happen. My guess right now is that the private evidence that slowly filters out will make him look worse than the public currently thinks, not better.
That said, I agree that “abandon all strategies that come into contact with him in any way” is probably unrealistic. Churchill worked with Stalin; there was a hotline between the US and USSR after the Cuban missile crisis; etc.
I also agree that OP was vastly overreaching when he said the public will identify EA with Sam Altman. I think that’s pretty unlikely as of the board exodus, if not earlier.
“We’re still in the early (maybe early middle) stages of whatever is going to happen. I don’t think this is the time to pick winners and put all our eggs in a single strategy. Besides, what’s the alternative? Policy? Do you think politicians aren’t shady cut-throat bad actors? That the other activists we would have to work alongside aren’t? Every strategy involves shifting semi-coalitions with shady cut-throat bad actors of some sort or another; you just try to do a good job navigating them and keep your own integrity intact.”
I sort of agree, but I also think policy has more natural checks and balances. Part of the hard work of doing good as a society is that you try to shape institutions and incentives so they produce good behavior, rather than relying primarily on heroic acts of personal integrity. My own guess is that thinking of “AI company” as an institution and a set of incentives would make it clear that it’s worse for safety than other plausible structures, though I understand that some within EA disagree.
Linch—I agree with your first and last paragraphs.
I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they’re one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.