An implicit problem with this sort of analysis is that it assumes the critiques are wrong, and that the current views of Effective Altruism are correct.
For instance, if we assume that systemic change towards anti-capitalist ideals actually is correct, or that taking in refugees really does have bad long-run effects on culture, then criticism of EA’s current views, and pressure on the community from political groups to adopt the critics’ views, is actually a good thing, providing a net benefit for EA in the long term by creating incentives to adopt the correct views.
My understanding of how EA typically responds to anti-capitalist critiques of EA:
EAs are very split on capitalism, but a significant minority aren’t fans of it, and the majority think (very) significant reforms/regulations of the free market in some form(s) are justified.
The biggest difference on economics between EA and left-wing political movements is that EA sees worldwide market liberalization as a, or the, main source of rising quality of life and material standards of living, and of an unprecedented decrease in absolute global poverty, over the last several decades. So EAs are likelier than most other left-leaning crowds to have confidence in free(r) market principles as fundamentally good.
Lots of EAs see their participation in EA as the most good they can do with their private/personal efforts, and often they’re quite active in politics, often left-wing politics, as part of the good they do as their public/political efforts. So, while effective giving/altruism is the most good one can do with some resources, like one’s money, other resources, like one’s time, can be put towards efforts aimed at systemic change. Whenever I’ve seen this pointed out, the distinction has mysteriously always been lost on anti-capitalist critics of EA. If there is a more important and different point they’re trying to make, I’m missing it.
A lot of EAs make the case that the kind of systemic change they are pursuing is what they think is best. This includes typical EA efforts, like donating to Givewell-recommended charities. The argument is that these interventions are based on robust empirical evidence and are demonstrably cost-effective, such that they improve the well-being of people in undeveloped or developing countries, and with it their ability to autonomously pursue systemic change in their own societies. There are also a lot of EAs focused on farm animal welfare, which they believe is the most radical and important form of systemic change they can focus on. As far as I’m aware, there are no significant or prominent public responses to these arguments from a left-wing perspective. Any such sources would be appreciated.
A lot of anti-capitalist criticism of EA concerns how it approaches the eradication of extreme global poverty. In addition to not addressing EA’s arguments for how its current efforts aim at effecting systemic change in the world’s poorest countries, anti-capitalist critics haven’t offered much in the way of concrete, fleshed-out, evidence-based approaches to systemic change that would motivate EA to adopt them.
Anti-capitalist critics are much likelier than EAs to see the wealth redistributed through private philanthropy as having been accumulated unjustly and/or through exploitative means. Further, they’re likelier than most of the EA community to see relative wealth inequality within a society as a fundamentally more important problem, and thus to see directly redressing it as a fundamentally higher priority. Because of these different background assumptions, they’re likelier to perceive EA’s typical approaches to doing the most good as insufficiently supportive of democracy and egalitarianism. As a social movement, though, EA is much more a voluntary community of people who contribute resources privately available to them than it is a collective political effort. A lot of EAs are active in political activity aimed at systemic change, publicly do so as part and parcel of their EA motivations, and not only tolerate but actively encourage public organization and coordination of these efforts among EAs and other advocates/activists. That anti-capitalist critics haven’t responded to these points seems to hinge on their not having acknowledged the distinction between the use of personal/private resources and public/political ones.
There isn’t much more EA can do to respond to anti-capitalist critics until anti-capitalist critics broach these subjects. The ball is in their court.
I was trying to figure out why I dislike this post so much, and I think this is why: the assumption that people in EA are correct and everyone else is incorrect, combined with a lack of depth in explaining why certain topics are criticized, and the omission of several important critiques. (Normally I don’t mind incomplete posts, but I think the tone combined with the list not being very good really bothered me.)
I just assume that EAs are correct about the EA things that we are doing. Of course that is a rational assumption to make. Otherwise you are just throwing yourself into a pit of endless self-doubt. It does not need to be argued that EAs know best about EA, just as it does not need to be argued that climatologists know best about the climate, psychologists know best about psychology and so on.
I think this is only true with a very narrow conception of what the “EA things that we are doing” are. I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world.
That’s all I believe constitutes “EA things” in your usage. Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on. If the global research community on poverty interventions came to the consensus “actually we think bednets are bad now” then EA orgs would need to listen to that and change course.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts, whatever constitutes expertise in the field in question.
I think EA is correct about the importance of cause prioritization, cause neutrality, paying attention to outcomes, and the general virtues of explicit modelling and being strategic about how you try to improve the world
Yes, and these things are explicitly under attack from political actors.
Funding bednets, or policy reform, or AI risk research, are all contingent on a combination of those core EA ideas that we take for granted with a series of object-level, empirical beliefs, almost none of which EAs are naturally “the experts” on
When EAs are not the experts, EAs pay attention to the relevant experts.
“Politicized” questions and values are no different, so we need to be open to feedback and input from external experts
This is not about whether we should be “open to feedback and input”. This is about whether politicized stances are harmful or helpful. All the examples in the OP are cases where I am or was, in at least a minimal theoretical sense, “open to feedback and input”, but quickly realized that the other people were wrong and destructive. And other EAs have likewise quickly realized that those critics were being wrong and destructive.
An implicit problem with this sort of analysis is that it assumes the critiques are wrong,
We know them to be wrong in basic logical terms as attacks against EA—none of these things require that EA itself change or die, just cause areas or other ideas within EA. This point has been made repeatedly to the point of consensus.
For instance, if we assume that systemic change towards anti-capitalist ideals actually is correct, or that taking in refugees really does have bad long-run effects on culture, then criticism of EA’s current views, and pressure on the community from political groups to adopt the critics’ views, is actually a good thing, providing a net benefit for EA in the long term by creating incentives to adopt the correct views
You missed the point of the post. I’m making no judgment on whether e.g. anticapitalism or refugees are good or bad. If you do that, then you’re already playing the game of making sweeping judgments about society writ large, which I’m not doing. I’m simply analyzing the direct impact on EA capital.
Internal debate within the EA community is far better at reaching truthful conclusions than whatever this sort of external pressure can accomplish. Empirically, it has not been the case that such external pressure has yielded benefits for EAs’ understanding of the world.
It can be the case that external pressure is helpful in shaping directions EVEN if EA has to reach conclusions internally. I would put forward that this pressure has been helpful to EA already in reaching conclusions and finding new cause areas, and will continue to be helpful to EA in the future.
I haven’t seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA. The limiting factor is robust evidence and analysis of cause areas.
I haven’t seen any examples of cause areas or conclusions that were discovered because of political antipathy towards EA.
Veganism is probably a good example here. Institutional decisionmaking might be another. I don’t think that political antipathy is the right way to view this, but rather just the general political climate shaping the thinking of EAs. Political antipathy is a consequence of the general system that produces both positive effects on EA thought, and political antipathy towards certain aspects of EA.
Who has complained that EA is bad because it ignored animals? EAs pursued animal issues on their own volition. Peter Singer has been the major animal rights philosopher in history. Animal interests are not even part of the general political climate.
Institutional decisionmaking might be another.
Looking at 80k Hours’ writeup on institutional decision making, I see nothing with notable relevance to people’s attacks on EA. EAs have been attacked for not wanting to overthrow capitalism, not wanting to reform international monetary/finance/trade institutions along the lines of global justice, and funding foreign aid that acts as a crutch for governments in the developing world. None of these things have a connection to better institutional decision making other than the mere fact that they pertain to the government’s structure and decisions (which is broad enough to be pretty meaningless). 80k Hours is looking at techniques on forecasting and judgment, drawing heavily upon psychology and decision theory. They are talking about things like prediction markets and forecasting that have been popular among EAs for a long time. There are no citations of, and no apparent inspiration from, any of these criticisms.
The general political climate does not deal with forecasting and prediction markets. The last time it did, prediction markets were derailed because the general political climate created opposition (the Policy Analysis Market in the Bush era).
It’s possible I’m wrong. I find it unlikely that veganism wasn’t influenced by existing political arguments for veganism. I find it unlikely that a focus on institutional decision making wasn’t influenced by the existing political zeitgeist around the problems with democracy and capitalism. I find it unlikely that the global poverty focus wasn’t influenced by the existing political zeitgeist around inequality.
All this stuff is in the water supply; the arguments and positions have been refined by different political parties’ moral intuitions and their battles with the opposition. This causes problems when there’s opposition to EA values, sure, but it also provides the backdrop from which EAs reason.
It may be that EAs have somehow thrown off all of the existing arguments, cultural milieu, and basic stances and assumptions that have been honed over the past few generations, but if true, that to me represents more of a failure of EA than anything else.
I find it unlikely that veganism wasn’t influenced by existing political arguments for veganism.
I find it obvious. What political arguments for veganism even exist? That it causes climate change? Yet EAs give more attention to the suffering impacts than to the climate impacts.
I find it unlikely that a focus on institutional decision making wasn’t influenced by the existing political zeitgeist around the problems with democracy and capitalism.
The mere idea that “there are problems with democracy and capitalism” is relatively widespread, not unique to leftism, and therefore doesn’t detract from my point that relatively moderate positions (which frequently acknowledge problems with democracy and capitalism) have better impacts on EA than extreme ones. The leftist zeitgeist is notably different and even contradictory with what EAs have put forward, as noted above.
I find it unlikely that the global poverty focus wasn’t influenced by the existing political zeitgeist around inequality.
People have focused on poverty as a target of charity for millennia, and people who worry about inequality (as opposed to worrying about poverty) are more stubborn towards EA ideas and demands.
it also provides the backdrop from which EAs reason.
There is an opportunity cost to not having a better backdrop. Even against a backdrop of political apathy, there would not be less information or fewer ideas (broadly construed) in the public sphere, just different ones, presented differently.
I was trying to figure out why I dislike this post so much, and I think this is why: the assumption that people in EA are correct and everyone else is incorrect, combined with a lack of depth in explaining why certain topics are criticized, and the omission of several important critiques. (Normally I don’t mind incomplete posts, but I think the tone combined with the list not being very good really bothered me.)
This is a misunderstanding. Perhaps you might re-read the OP more carefully?
Feel free to add to the list.
I would take your response more seriously if you hadn’t told everyone who commented that they had misunderstood your post.
If everyone’s missing the point, presumably you should write the point more clearly?
There is an opportunity cost to not having a better backdrop. Even against a backdrop of political apathy, there would not be less information or fewer ideas (broadly construed) in the public sphere, just different ones, presented differently.
Seems plausible.