Legality as a Career Harm Assessment Heuristic
A question many people in the effective altruism movement have struggled with around earning to give is how to handle potentially harmful careers. It’s obviously self-defeating if you cause more harm in earning your money than the good it does when you donate it, but we want a higher threshold than that. As humans we need to have approaches that account for our self-serving biases, where we tend to underestimate the harm we cause and overestimate the good we do. Additionally, some kinds of harm (ex: murder) do not seem like the kind of thing you ought to be able to “cancel out” through donation, even if the donation clearly has larger benefits (ex: saves vastly many lives).
Unfortunately, for most jobs, even questionable ones, the social impact is very hard to work out. Consider someone deciding to go into the oil industry: how much would they contribute to carbon emissions, after accounting for the oil company's elasticity of labor and the elasticity of production? Does cheaper oil displace even more carbon-intensive coal? How likely are extreme climate outcomes? Is the benefit of cheaper energy in lifting people out of poverty enough to make it positive on its own? Making a high-quality impact estimate for a career is a huge amount of work, and there are a lot of potential careers, especially when you consider that some roles in the oil industry might be far more replaceable than others.
What should we do in cases where the benefits seem much larger than the harms, but the harms are still significant? A potential rule I’ve been kicking around is, “don’t do work that is illegal, or that would be illegal if the public knew what you were really doing.” The idea is, we have a system for declaring profitable activities with negative externalities off limits, one that is intended for the more common case when someone is keeping what they earn for their own benefit. But we can’t just use “don’t do work that is illegal” because our legislative system can be slow to react to changes in the world or information that isn’t yet widely available. For example, if most people understood the cost-benefit tradeoffs in research to assess the pandemic potential of viruses or create very powerful AI systems I expect both would be prohibited.
It is, however, only a heuristic. For example, it gives the wrong answer in cases where:

- Crafting a law prohibiting the versions of an activity that are net negative would unavoidably cause people to stop doing closely related beneficial activities.

- The law is wrong and carefully considered civil disobedience is needed to convince others.
I expect there are other areas where this rule permits careers altruistically-minded people should avoid (even if the benefits seem to dramatically outweigh the costs) or rejects ones that are very important. Suggesting examples of either would be helpful!
Choosing a career is the kind of large-consequences decision where going beyond our heuristics and thinking carefully about outcomes is often warranted. Still, I see a lot of value in sorting out a framework of general rules and common exceptions, which people can use to think through how their particular situation fits.
Comment via: facebook, lesswrong, the EA Forum, mastodon
I think 80k have the best article on this subject and I don’t think you referred to it in the above? If I’ve interpreted your post and their article correctly, I think they’re more restrictive than you are here.
Said 80k article: https://80000hours.org/articles/harmful-career/
Thanks for linking that one! In drafting this, the only 80k articles I found were the two older ones I linked above: What Are The 10 Most Harmful Jobs and Show Me the Harm. Is It Ever OK to Take a Harmful Job in Order to do More Good? is a much more detailed article, and I wish I'd seen it before writing this!
(I’m not sure why I didn’t find it before—looking now, it’s in the top few results for most reasonable searches.)
Aha yes I saw that you’d linked to those two older ones! Given that you had, I was surprised you’d missed this one :)