AI Czar attacks EA. (Again.)

Today in this post on X, the U.S. ‘AI Czar’ David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming it’s nothing more than a ‘censorship power play’: a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and block technological progress.
He quote-posted this blog by Jordan Schachtel titled ‘Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI’.
As an AI Safety advocate, a member of Humans First, an Effective Altruist, and a political conservative, I’m angry about this misrepresentation of the AI safety campaign. And I think EAs should fight back harder against senior federal officials smearing our movement.
Any suggestions on how to respond? I don’t have time this week to write a detailed rebuttal, but I’d be happy to link and promote anything that others write.
(Not a solution, but a general observation about people who engage in bashing EA.)
The “dot connectors” will always connect the dots, infer or invent nefarious motivations, and try to bucket you as they like. The problem is that you can’t neatly map EAs onto the political spectrum—yes, there are dominant trends, but the variance in views is sufficiently high that commentators have genuinely no clue where EAs belong. This makes sense because most major movements in history have been political ones, so when assessing EA, most people pull out their internal political philosophy detector and you end up with a mess like the chart below!
But EA is a moral philosophy movement, and the chain of thinking is genuinely different. Instead of debating how to organize society and labor, EAs unanimously agree on beneficentrism and grapple with questions like, “What morally matters? To what degree? Which interventions are most effective? How do you even assess what is most effective?” When you organize a movement around this set of questions, you end up with:
Some people who want to automate software engineering, some who want to pause AI development entirely, and others who think we should defensively accelerate progress
At least two frontier AI labs: let’s not forget OpenAI received $30 million in philanthropic money at its inception!
Some EAs who think that AI will be a big deal for {their cause area}, others who are skeptical of the whole AI bundle
Some EAs passionately dislike AI writing, some are fine with methodical use of AI in writing, and some are even more liberal about it
One particular EA who is the loudest voice combatting the data center water usage myth
(At least) one person from the EA-sphere who has large holdings in AI infrastructure
And conservative AI Safetyists like you and liberal long-timeline accelerationists like me
I don’t know what the best solution for combatting EA bashing is, but spreading the idea that EA is more politically and intellectually diverse than people think should help.
This is a slow-burn solution, but the most effective support and rebuttals will come from people who aren’t EAs, but are just fair and principled, and have had enough exposure to EA to know when attacks are unfair (e.g. see Dean Ball this week). So the more surface area EAs can create with those sorts of people, the better the position EA is in. For example, I think Andy Masley’s datacenter water use posts created a lot of surface area with such people and have been better for the EA ‘brand’ than any specific rebuttal.
(Part of this strategy involves, as a general principle, “behaving with as much dignity, integrity, and fairness as possible, even when others aren’t.”) (Admittedly, my own responses usually involve gently poking fun, but I do try to stay good-natured.)
I think we could use a documentary series where we just go follow around orgs or individual EAs for a couple days and see how they talk, live and act. It would be pretty cheap at the very least.