I blog about political/economic theory and moral philosophy.
Sam Battis
A Newcomer’s Critique of EA—Underprioritizing Systems Change?
Are systems change nonprofits effective? A shallow estimate of the efficacy of nonprofit legal work
I suspect that the social cost of making “I have [better/worse genetics] than this person” a widespread, politically relevant, and socially permissible subject outweighs the potential benefits of policies like subsidized abortions for people addicted to drugs and special incentives for educated women to have kids.
With regard to targeted abortion subsidies, what about the risk of reanimating the “abortion is eugenics” argument against its legality, particularly in the US, where abortion has been banned in many states? If you believe that abortion’s legality has had very positive genetic effects, then shouldn’t preserving that legality be an extremely high priority, and the political cost of proposing this policy prohibitive?
If you want to make abortions more accessible, why not make them free for everyone? That would make abortion a better option for people who might not be able to face the short-term financial burden of its cost, reduce the odds of it being banned by reinforcing the idea that it is a right, and avoid the backlash that a targeted subsidy would unleash.
It seems like ensuring that everyone gets to decide when they are best prepared to have and raise kids, if ever, has such incredibly high positive externalities that if anything there is a strong argument that abortion should be free already. The same goes for condoms, birth control pills, and IUDs.
Why not advocate for new, universal public goods, rather than policies that unnecessarily risk negative social and political impacts?
Worth noting that although those factors likely increase the expected strength of the relationship between money and happiness, when it comes to interpreting that strength, there are factors potentially reducing the proportion of the relationship that can be explained by “more money causes more happiness”:
Reverse causality (Being happier plausibly makes you earn more money through various social and health impacts)
Confounding variables (E.g. being a hard worker makes you happier, and being a hard worker just happens to make you more money on average)
Also, if you perceive success (i.e. status) as being strongly associated with having higher income, then the actual mechanism by which higher income raises your happiness may be through a status gain, rather than a consumption increase. And as status is a positional good, this would be a reason to down-revise our expectation that increasing a person’s consumption at these levels will actually increase a society’s net happiness.
I’m pretty new to the movement and generally have never done research at a high formal level, so I suppose expertise. Is there a link somewhere to a sort of guide for doing research at the level of detail expected?
It’s true that you didn’t technically advocate for it, but in context it’s implied that subsidies for abortion for people who are addicted to drugs would be a good policy to consider.
“We are not going to stop hearing about eugenics. Every time someone tries to call it something different, the “e” word and its association with historic injustice and abuse is invoked to end the discussion before it can begin.
When someone says that screening embryos for genetic diseases, giving educated women incentives to have children (like free child care for college educated women), or offering subsidized abortions for women addicted to drugs is “eugenics” they are absolutely using the term correctly.”
I accept that the idea “abortion is eugenics” is already advanced by some conservatives. However, I think that the policy of targeted abortion subsidies would convince more people that “abortion is eugenics,” and I think that this would make it easier to ban abortion.
I think the fact that Israel already has a very different cultural environment regarding genetic interventions means that those examples of targeted subsidies may well be much more controversial in other countries.
I’m glad you agree on that last point.
For me it’s been good to make a habit of looking for the least controversial policy that achieves desired goals. I often discover reasons that the more controversial options were actually less desirable in some way than the less controversial ones. This isn’t always the case, but in my experience it has been a definite pattern.
Hi Brad,
The counterfactual is definitely something that I think I should examine in more detail.
Agreed that the marginal effect would be fairly logarithmic, and I probably should have considered the fact that there is quite a lot of competition for employment at Earthjustice (i.e. one may need to be in the top 0.001% of lawyers to have counterfactual impact).
I am almost completely convinced by the argument that seeking to work for Earthjustice is worse than ETG, actually, so I might go and make some rather sweeping modifications to the post.
I think that the exercise does at least stand as a demonstration of the potential impact of systems change nonprofits with new/neglected focus and that Earthjustice is a success story in this realm.
Do you have a high level of confidence that Earthjustice is too large/established for it to compete with funding new and/or neglected projects?
I agree that to the extent that EA engages in policy evaluation or political/economic evaluation more generally, it should use a sentient-experience maximization framework, while discarding the labels of particular political theories, in the way that you described. And I think that so far every discussion I’ve seen of those matters in EA follows that framework, which is great.
With regard to specific arguments about post-politics:
I thought you made a strong case for post-politics in general, but the claim that a specific economic strategy is the best possible beyond all doubt is much more difficult to defend, and besides, does not seem very post-political. In general, a post-political person might argue for any economic strategy under the sun as optimal for sentient beings, though of course some arguments will be stronger than others.
Also, regardless of the systems they believe to be optimal, post-political people should be sure to entertain the possibility that they, others, or a given group or polity are actually too post-political—not having the optimal amount or type of cultural orthodoxy/dogma and being unwilling to address that issue.
This may come into play when an individual or group’s conviction in rights and other deontological tools becomes too weak, or the deontological rules and norms they follow are worse than competing options.
After all, an orthodoxy or political culture beyond post-politics is necessary for “tie-breaking” or producing decisions in situations where calculation is inconclusive. Some political culture beyond post-politics will inevitably sway people in such situations, and it is worth making sure that that political culture is a good one.
An individual post-political thinker therefore may embrace advocacy of a certain political culture because they think it is so valuable and under-utilized that advocating for it is a more efficient use of their resources than advocating for post-politics.
Generally I would say most people and institutions could stand to be more post-political, but I am not sure whether post-politics is currently a better advocacy target than other cultural/political movements.
If one were to advocate for such a movement, I’d guess the best way would be to create a political forum based on those principles and try to attract the people who like participating in political forums. Then the goal would be to make sure the discourse is focused on doing the most good for the most people, with rigorous evidence-based breakdowns for particular policies. This might be a decent use of time given that this post-political approach could improve thousands of people’s decisions as they relate to systems change.
If something like this was created, I would recommend adding a system for users to catalogue, quantify, and compare their political positions, including their degree of confidence in each position. The capability to quickly and easily compare political positions between individuals seems like a very fast way to advance the accuracy of individuals’ beliefs, especially in a community dedicated to strong beliefs lightly held and finding the policies that do the most good.
Race for the Galaxy is an excellent game.
Gratefulness can sound cheesy but it’s one of the most scientifically-backed ways to make humans happy. I’ve found that a nice ritual to do with the people you live with is to go around the table and have each person say something they’re grateful for before eating dinner together.
2min coherent view there: the likely flowthrough effects of not saving a child right in front of you (on your psychological wellbeing, community, and future social functioning) are, especially compared to the counterfactual, drastically worse than those of not donating enough to save two children on average. And the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that to numb or ignore it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:
-psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
-effects of community flow-through effects in developed countries regarding altruistic social acts in general may be undervalued, especially if they uniquely foster one’s own well-being or moral character through exercise of a “moral muscle”
-it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to Make a Habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)
To me, naive application of utilitarianism often leads to underestimating these considerations.
I think it is potentially difficult to determine how good the average doctor is in a particular place and how much better one would be than the average, but if one could be reasonably confident that they could make a large counterfactual impact on patient outcomes, the impact could be significant. The easiest way to be sure of these factors that I can think of would be to go somewhere with a well-documented shortage of good doctors, while trying to learn about and emulate the attributes of good doctors.
Being a doctor may not be one of the highest impact career paths on Earth, but it might be the highest impact and/or the most fulfilling for a particular person. High impact and personal fit/fulfillment are fairly highly correlated, I think, and it’s worth exploring a variety of career options in an efficient way while making those decisions. In my experience, it can be very difficult to know what one’s best path is, but the things that have helped me the most so far are experiences that let me get a taste for the day-to-day in a role, as well as talking to people who are already established in one’s prospective paths.
True. I think they meant that it’s plausible humans would convert the entire population of cows into spare parts, instead of just the ones that have reached a certain age or state, if it served human needs better for cows to not exist.
I agree that activism in particular has a lot of idiosyncrasies, even within the broader field of systems change, that make it harder to model or understand but do not invalidate its worth. I think that it is worthwhile to attempt to better understand the realms of activism or systems change in general, and to do so, EA methodology would need to be comfortable engaging in much looser expected value calculations than it normally does. Particularly, I think a separate system from ITN may be preferable for this context, because “scope, neglectedness, and tractability” may be less useful for the purpose of deciding what kind of activism to do than other concepts like “momentum, potential scope, likely impact of a movement at maximum scope and likely impact at minimum or median scope/success, personal skill/knowledge fit, personal belief alignment” etc.
I think it’s worth attempting to do these sorts of napkin calculations and invent frameworks for things in the category of “things that don’t usually meet the minimum quantifiability bar for EA” as a thought exercise to clarify one’s beliefs if nothing else, but besides, regardless of whether moderately rigorous investigation endorses the efficacy of various systems change mechanisms or not, it seems straightforwardly good to develop tools that help those interested in systems change to maximize their positive impact. Even if the EA movement itself remained less focused on systems change, I think people in EA are capable of producing accurate and insightful literature/research on the huge and extremely important fields of public policy and social change, and those contributions may be taken up by other groups, hopefully raising the sanity waterline on the meta-decision of which movements to invest time and effort into. After all, there are literally millions of activist groups and systems-change-focused movements out there, and developing tools to make sense out of that primordial muck could aid many people in their search to engage with the most impactful and fulfilling movements possible.
We may never know whether highly-quantifiable non-systems change interventions or harder-to-quantify systems change interventions are more effective, but it seems possible that to develop an effectiveness methodology for both spheres is better than to restrict one’s contributions to one. For example, spreading good ideas in the other sphere may boost the general influence of a group’s set of ideals and methodologies, and also provide benefits in the form of cross-pollination from advances in the other sphere. If EA maximizes for peak highly-quantifiable action, ought there to be a subgroup that maximizes for peak implementation of “everything that doesn’t make the typical minimum quantifiability bar for EA”?
I think that countries for which reducing inequality was almost a religious conviction, such as the USSR, had terrible governments which should never be replicated. However, I think that countries that invest in their public sector and social safety net more than the US or the UK do have very good track records today. There’s always the classic Scandinavia example, and I think the UK’s rocky economic performance over the past twenty years has a lot to do with its push to privatize and reduce provision of public services. Germany does fairly well, although it does tend to undercut the rest of the EU with its more permissive labor regulation. It’s impossible to tell for sure, but I think Japan would have had an even rougher go of it if it had engaged in as little public investment as the Anglophone countries. The biggest issue for many of these countries is birth rate, although it’s worth noting that Scandinavia outperforms the rest of the EU on this measure except France, and far outperforms East Asia; their generous maternity/paternity leave is likely part of that.
Providing these public goods does indeed require persuading billionaires to give you money, and there is always the issue of capital flight. Thankfully, countries like the US and the UK are often the recipients of that capital flight because they have a large population, speak English, and have lots of fun things for rich people to do, plus lower taxes. So I’m sure that some amount of capital flight or attempts at tax evasion would result from these countries raising obligations on their richest citizens, but I think if anything it is likely to be less dramatic than what most of Europe has suffered for their welfare states, and I think the decision was still a net positive for those countries. In my reckoning, combining Northern European institutions with America’s birth rate and dynamic multiculturalism would probably result in even greater economic growth than America currently enjoys.
It’s all definitely up for debate though. Thanks for the response!
I don’t believe Facebook’s structure and people’s prior associations with the quality of discussion that occurs on Facebook would enable rational debate at the level of the EA forum, but on any platform, I would agree that if a line in the sand is crossed and discussions of specific policies become conceived of as “Politics”, and tribalism creeps in, the results are usually quite bad.
I can’t imagine that political tribalism would fly on the EA forum, although of course it is necessary to be vigilant to maintain that. Indeed, if I were to rewrite that post today I would revise it to express much less confidence in a particular view of global systems, and focus more on the potential for thinking about global systems to offer opportunities for large impacts.
I think there is evidence EA is capable of doing this without damaging epistemics. It is currently widely accepted to talk about AI or nuclear regulations that governments might adopt, and I haven’t seen anything concerning in those threads. My argument is essentially just that policy interventions of high neglectedness and tractability should not be written off reflexively.
Earthjustice and other law groups (there’s a YIMBY Law group as well that is probably less impactful but at least worth looking into) are nice because they improve de facto systems, but don’t need to engage with the occasional messiness of overt system change. Instead, they ensure local governments follow the laws that are already in place.
I think that for consequentialists, capability-maximization would fall into the same sphere as identifying and agitating for better laws, social rules, etc. Despite not being deontologists, sophisticated consequentialists recognize the importance of deontological-type structures, and thinking in terms of capabilities (which seem similar to rights, maybe negative rights in some cases like walking at night) might be useful in the same way that human rights are useful—as a tool to clarify one’s general goals and values and interpersonally coordinate action.
Is the nonprofit lawyer really making a lower impact per hour worked compared to the earning-to-give corporate lawyer? This could be a good case study of system change efforts vs direct donation.
Let’s say the lawyer is donating $200,000/year less than they would have if they stayed at a for-profit firm (donating $200,000 requires an extremely strong conviction in the efficacy of effective donation and something like top 1-5% earnings for a lawyer, but I’ll use this to be conservative), but now is working on enforcing environmental legislation.
$4,500 to save a life with AMF in Guinea according to GiveWell: $200,000 / $4,500 ≈ 45 lives saved per year from malaria. So over a twenty-year career at the nonprofit, say, the lawyer would have to accomplish good equivalent to saving 900 lives.
The easiest way to convert impact to lives is probably estimating the lives lost for a given amount of carbon emitted. Thankfully, this estimate has been made: roughly 4,434 metric tons of carbon averted corresponds to one life saved. So the lawyer needs to save 3,990,600 tons of carbon to hit equivalence.
Looking through the “recent wins” page of Earthjustice, the largest environmental justice employer, I found a case that is estimated to have saved 970,000,000-1,800,000,000 tons of carbon by 2050. Earthjustice can’t take full credit for this; they were just part of a team suing, along with many city and state governments. Let’s say their expertise was responsible for 1% of the win. Taking the midpoint of the carbon estimate, 1,385,000,000 tons were saved, of which Earthjustice was responsible for 13,850,000.
This means that if every lawyer at Earthjustice (there are 200+, so we’ll estimate 299 to be super safe and account for the other workers who support them) had a win that big just once over their twenty-year career, they would each be outperforming $200,000/yr donated by a factor of roughly 3.5.
If this singular recent win was the whole impact of Earthjustice for 2022, how would that stack up, divided among 299 lawyers/personnel?
Well, that’s 46,321 tons of carbon, per lawyer, per year. Over 20 years that’s 926,421 tons. So, rather poor. That’s about a fourth of the impact of the $200,000/year. Equivalently, Earthjustice needs to put out 4 wins of that magnitude a year to justify its existence through proactive action alone (though a decent percentage of their actual impact is in deterrence, I’d imagine). Here’s the recent victories page if you want to judge for yourself how large the impact of various wins is. (EDIT: I found at least one other 2022 win with comparable carbon impact; see comments below.)
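For transparency, the napkin math above can be reproduced in a few lines. This is a minimal sketch using the comment’s own rough inputs (the $4,500-per-life GiveWell figure rounded to 900 lives over a career, the 4,434 tons-per-life conversion, the 1% credit assumption, and the 299-staff estimate); none of these numbers are authoritative.

```python
# Reproduces the napkin estimate above. All inputs are this comment's
# own rough assumptions, not authoritative figures.

CAREER_YEARS = 20
LIVES_SAVED = 900            # ~$200,000/yr / $4,500 per life over 20 years, rounded
TONS_PER_LIFE = 4_434        # metric tons of CO2 averted per life saved

# Carbon the lawyer must avert over a career to match the forgone donations
tons_needed = LIVES_SAVED * TONS_PER_LIFE            # 3,990,600 tons

# One large Earthjustice win: 0.97-1.8 billion tons saved by 2050,
# with Earthjustice credited for 1% of the outcome.
win_midpoint = (970_000_000 + 1_800_000_000) / 2     # 1,385,000,000 tons
earthjustice_share = 0.01 * win_midpoint             # ~13,850,000 tons

STAFF = 299
# Treat that single win as one year's total organizational output
tons_per_lawyer_yearly = earthjustice_share / STAFF  # ~46,321 tons
tons_per_lawyer_career = tons_per_lawyer_yearly * CAREER_YEARS  # ~926,421 tons

print(f"Needed to match donations: {tons_needed:,} tons")
print(f"Career total at that rate: {tons_per_lawyer_career:,.0f} tons")
print(f"Ratio: {tons_per_lawyer_career / tons_needed:.2f}")  # about a quarter
```

Changing the 1% credit assumption or the staff count scales the result linearly, which is most of why the estimate is so sensitive.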
Overall, I’d say it’s quite plausible (even probable, after further consideration; see the comments below) that the environmental lawyer would have equivalent or greater impact. To be fair, the OP was written when GiveWell’s cost estimate for saving a life was much lower.
Thank you for your response! I wasn’t aware of those EA political action funds or the fact that some EA groups do local work.
1. I agree wholeheartedly that rich country politics are a saturated field and EA should avoid conventional engagement with them. On its grandest scale, I believe EA should always be primarily about nonpartisan direct giving and existential risk prevention, because these things are very quantifiably good, and that’s extremely important.
My argument is more that building local EA groups through engagement with local issues could expand the movement and induce greater investment from those already on board, by giving your average EA with a job not directly related to the highest-impact fields another way to engage with the gospel of effective doing that unites EAs. This average EA might currently spend 20 money-units on a cocktail of GiveWell and existential risk donations and 20 time-units on deciding what to donate to and engaging with the local group. My hypothesis is that if local EA groups pursued a dual strategy of local and international work, discussing the highest-impact opportunities in each separately (with an understanding that international work provides more bang-for-buck but there are still effective ways to spend time on local issues), that average EA would still spend 20 money-units on GiveWell and existential risk, but they might add 1 money-unit of seed money for a local political campaign and double their time-unit investment, because there is now a local project that the group decided to work on. Plus EA’s fame grows as a movement that is most dedicated to the highest international good, but is nonetheless willing to put in some local effort. I think it Feels Good to do something local, and outsiders will Feel Good about EA if it accomplishes some useful local thing.
2. I want to steer super clear of DSA-like stuff. I am joining EA and not the DSA for a reason—the quality of meta-debate, introspection, and ideological diversity of EA make it far more likely to have a long-term positive impact on the world, in my opinion. I think EA’s reverence for quantification and transparency is also pretty unmatched. Plus I think EAs are correct, from a moral-calculus perspective, to spend their energies building a movement with more expansive goals than political groups and more focus on things like direct aid. I think local projects would have to target non-incendiary policies, if they targeted policies at all. YIMBYism or improving voting accessibility or advocating for public parks are the kind of policies that seem to be in the sweet spot of impactful, maybe a bit neglected, and not likely to alienate anyone (maybe some YIMBY policies are too dangerous in this regard though). On the individual or really small group level EAs might do little things like getting permission from the city to build a small bridge over a neighborhood creek or making one of those little book libraries from scratch. I think what I’m envisioning is just a bimodal culture of care where you put like 95% of your philanthropic money into international efforts but much of your philanthropic time goes towards bonding with EAs on local projects. Maybe what I’m describing isn’t actually that far off from some local EA groups. As far as I can tell, it’s pretty different from the one where I live, though.
I think even with regard to local policy-related issues, EA would do it better than a group like DSA, by identifying the policies that are most universally-desirable, and having the ability to ignore the political sphere if no impactful opportunities arise.
3. I have to agree that poverty in the UK is a different and altogether less pressing issue than poverty in Zimbabwe. I think that was probably the weakest segment of my argument. I do, however, think that the general compression of the middle classes of these countries has enough negative psychological and social impacts that we should be concerned for the wellbeing of both the inhabitants of these countries and those countries’ institutions. If things aren’t going well in rich countries, how can we fix the world by making poor countries into rich countries? (Obviously it’s more efficient to focus on poor countries, but I think we should at least make symbolic or local efforts in rich countries.)
4. I’m not sure how to improve the institutions of developing countries either, to be honest, but given how impactful it seems likely to be, I think EA should look into how it might be done. I suspect that at least some high-impact opportunities would be revealed by the search. To your other point, I think making the US less unequal and more democratic would actually have extremely dramatic impacts on future world history. From pure GDP and military numbers, it seems crucial that it perfects its institutions and is a global steward for good governance, especially in a world with reasonably strong autocracies that would like to see liberalism rot from the inside out. Good governance is a subject for debate, but assuming one’s assessment is accurate, if money could effectively be spent on improving US governance, it would probably be one of the most impactful causes in the world to focus on. Alas, it is also the most crowded market on earth and it is probably only worthwhile to spend money on extremely specific overlooked efforts to improve governance. For example, if there was a really promising, really transparent movement run by EA-aligned people to give everyone Election Day off, I might give it a bit of seed money.
5. That sounds great. Will have to investigate further.
6. I agree that “reducing inequality” is not an end that inherently justifies itself, and certainly wouldn’t make a good prospective tenet of EA. On the other hand, although this obviously is still up for debate, I think there’s pretty good evidence for the long-term economic and social benefits of a larger public sector and social safety net than the US and the UK currently have. Those are the kinds of policies I could see EA advocating for, at least from my relatively uninformed perspective about what EAs consider too political for the scope of the movement. I agree that ideological diversity is inherently good for a movement. I think if there was some apparatus for community endorsement of a policy, requiring 80% consensus would be a pretty good protection against alienation, but I could be wrong about this.
7. Really cool stuff, this is the kind of thing I was envisioning for selective engagement with the political system. I think it’s good to have this stuff on the side as long as it doesn’t come to dominate too much, especially not the “international” side of the local/international focuses. I’m currently using a 75/75/75 rule for my own donation where 75% goes to immediately-impactful GiveWell aid, 75% of the remaining 25% goes to existential risk, 75% of the remaining 6.25% goes to improving governance, and the remaining 1.56% goes to pet/local projects. I think I will be donating to these funds as part of my governance donation, particularly the YIMBY one as it seems underfunded to me.
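As a side note, the 75/75/75 split above works out cleanly; here is a quick sketch of the arithmetic, with category names taken from the comment (the labels are illustrative, not a recommended allocation).

```python
# The 75/75/75 donation rule: each tier takes 75% of whatever remains.
total = 1.0
givewell = 0.75 * total                  # 75.00% to immediately-impactful aid
remaining = total - givewell
xrisk = 0.75 * remaining                 # 18.75% to existential risk
remaining -= xrisk
governance = 0.75 * remaining            # ~4.69% to improving governance
local = remaining - governance           # ~1.56% to pet/local projects

print(f"{givewell:.2%} / {xrisk:.2%} / {governance:.2%} / {local:.2%}")
```

Because each tier halves-and-some the remainder, the leftover shrinks geometrically: the final bucket is 0.25³ of the total, or about 1.56%.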
Again, thanks for the reply!
Yeah honestly I don’t think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the “good” aimed at.
I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually.
“Directness” inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an “all else being equal” scenario is impossible.
Related to initial deontologist point: when your average person expresses a “directness matters” view, it is very likely they are expressing concern for these considerations, rather than actually having a diehard deontologist view (even if they use language that suggests that).
I also think that EA sometimes dismisses categories of problems out of an assumption that most solutions currently proposed to those problems are either not neglected or have a low expected value, despite the likelihood that high-value opportunities are lurking amidst the chaff.
After all, EA’s original focus was sorting through the labyrinth of ineffective direct global health and poverty reduction interventions. In theory, we should now be sorting through other broad fields like public policy, climate change, and so on, to find interventions comparable to the best direct aid/global development opportunities.
In the climate change realm, environmental law groups like Earthjustice appear on paper to be competitive with top GiveWell nonprofits. Much more thorough research would be needed, but napkin calculations seem promising.