The Paradox of Ineffective Egoism in Effective Altruism

An Anecdote

I had to wait at least five days to make this post, just as I did with my previous one. Why? Because I have “negative” karma, mostly from a single comment I made on another post, in which I pointed out the irony of debating “AI rights” while basic human rights are still contested. I guess people didn’t like that. But no one bothered to explain why.

So this has been my introductory experience on the EA Forum: silence, downvotes, and the lingering impression that dissent isn’t welcome.

What I expected to be a platform for rigorous intellectual debate about doing the most good has instead proven to be an echo chamber of unoriginality, one that suppresses outside-the-box thinking. And that, I think, points to a larger problem.

From Observation

As someone interested in AI safety, I’d heard the EA Forum was the place for serious discourse. So I browsed before posting.

After going through several of the most upvoted posts, I started to notice a pattern — sameness in tone, sameness in structure, even sameness in thought. Ideas endlessly repackaged, reframed, and recycled. A sort of intellectual monoculture.

And this sort of culture, if left unexamined, risks reproducing the same narrow, ineffective solutions to the very problems it purports to solve.

From Experience

Eventually, I posted my own argument: that unsafe AI is already here because we are unsafe humans. The training data mirrors our history and our culture, both steeped in domination and hierarchy, and those patterns become an inherent part of the models.

Within hours, it was downvoted. No comments. No engagement. No critique. Just silent rejection.

Why? Maybe because my argument is completely baseless. Or, more likely, because I proposed a view that doesn’t fit the conventional EA framing of “AI safety,” and genuine dissent is simply unwelcome. Either way, it reflects a kind of collective self-censorship, where ideas that don’t conform to the dominant worldview get swept under the rug.

Insight or Rant?

So what does it say when a movement that aims to “do the most good” reflexively suppresses ideas it doesn’t approve of?

Maybe this post will answer that question: will it be ignored, downvoted, or discussed?

Whatever the response, it will tell us more about the state of the philosophy itself than about the validity of the argument.

Because if effective altruism can’t tolerate challenge or discomfort, then it’s not really effective, and it’s certainly not altruistic.