I know you wrote this five years ago, but I think this is the opposite of what effective altruism needs now. The worrying tendency I see in effective altruism nowadays is for people to circle the wagons against criticism.
The current EA community, or at least large parts of it, has become somewhat radicalized around certain views, particularly on near-term AGI forecasts, AGI safety/alignment, social justice, scientific racism, sexual harassment, and a quirky, home-grown variety of Bayesianism-utilitarianism.
One of the root causes of this radicalization seems to be ideological insularity, that is, the effect of living in a filter bubble or echo chamber. I'd recommend changes that promote more exposure to different viewpoints, and more serious engagement with them.
So, rather than a bulldog (or another one, or several), maybe what EA needs is more public debates or dialogues between people with differing views on the topics where EA has become more radicalized over the last five years or so.