(Not a solution, but a general observation about people who engage in bashing EA.)
The “dot connectors” will always connect the dots, infer or invent nefarious motivations, and try to bucket you as they like. The problem is that you can’t neatly map EAs onto the political spectrum—yes, there are dominant trends, but the variance in views is sufficiently high that commentators have genuinely no clue where EAs belong. This makes sense because most major movements in history have been political ones, so when assessing EA, most people pull out their internal political philosophy detector and you end up with a mess like the chart below!
But EA is a moral philosophy movement, and the chain of thinking is genuinely different. Instead of asking how to organize society and labor, EAs unanimously agree on beneficentrism and grapple with questions like, “What morally matters? To what degree? Which interventions are most effective? How do you even assess what is most effective?” When you organize a movement around this set of questions, you end up with:
Some people who want to automate software engineering, some who want to pause it entirely, and others who think we should defensively accelerate progress
At least two frontier AI labs: let’s not forget OpenAI received $30 million in philanthropic money at its inception!
Some EAs who think that AI will be a big deal for {their cause area}, others who are skeptical of the whole AI bundle
Some EAs passionately dislike AI writing, some are fine with methodical use of AI in writing, and some are even more liberal about it
One particular EA who is the loudest voice combatting the data center water usage myth
(At least) one person from the EA-sphere who has large holdings in AI infrastructure
And conservative AI safetyists like you and liberal long-timeline accelerationists like me
I don’t know what the best solution for combatting EA bashing is, but spreading the idea that EA is more politically and intellectually diverse than people think should help.