This is a dubious line of argument because it evaluates EA as a closed system and ignores the potential costs and benefits to other actors. EAs may get warm fuzzies from ignoring political discussions, but that doesn’t make it the most effective way to improve the world. It is entirely plausible to me that the most effective way to improve the world is through politics, given how much power and decision-making it involves.
Rejecting a way to do good because it might taint your lily-white epistemics may be better epistemics, but that doesn’t make it EA.
Basically, what I mean here is that EA and LessWrong have wisely not fallen for shiny political discussions and have stayed focused on their original goals: for EA, doing the most good; for LessWrong, rationality. And there is a massive lesson from Robin Hanson for people who try to do good in the world: pull sideways in politics, rather than forwards or backwards.
EA’s comparative advantage is in noticing and effectively dealing with important, less-discussed causes like AI, global poverty, bio-risk, x-risk more generally, and others. Another advantage EA has is that it roughly tracks the truth epistemically (within some error bars), and this matters because, while you do ultimately need to be able to put your vision into practice, you also need to be able to see what the world actually looks like in order to pursue any goal well, including do-gooding. Intelligence and rationality help every cause. And without significant changes to how humans work, politics is where we are at our most irrational, and EA has rightly assessed political power as low in tractability, medium-high in importance, and low in neglectedness. Political discussion is much, much lower in importance than political power. It’s a perfectly anti-effective cause area.
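To make that importance/tractability/neglectedness reasoning concrete, here is a minimal sketch of the multiplicative ITN heuristic popularized by 80,000 Hours. The numeric scores and the itn_score helper are illustrative assumptions of mine, not real estimates; the point is only that a cause scoring low on tractability and neglectedness can rank poorly even when its importance is medium-high.

```python
# Illustrative ITN (importance, tractability, neglectedness) scoring sketch.
# Scores are made-up placeholders on a 1-10 scale, not actual estimates.
causes = {
    "AI alignment":         (9, 4, 8),
    "Global poverty":       (7, 7, 5),
    "Political power":      (7, 2, 2),  # medium-high importance, low T and N
    "Political discussion": (3, 2, 1),  # even lower importance
}

def itn_score(importance, tractability, neglectedness):
    """Multiplicative ITN heuristic: a rough proxy for impact per marginal effort."""
    return importance * tractability * neglectedness

# Rank causes by the heuristic, highest score first.
ranked = sorted(causes.items(), key=lambda kv: itn_score(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:22s} ITN score = {itn_score(*scores)}")
```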
As for my ranking of causes, I’d say the top 5 by impact are:
AI Alignment
X-risk/Longtermism
Global Health
Global Poverty
Cause X, such as inequality.