(Incorrectly) Overemphasizing Effective Careers
Effective altruism can be compatible with most moral viewpoints, and there is nothing fundamental about effective altruism that requires it to be a near-exclusive focus of one's life. There is, however, a style of analysis of effective altruism that seems to implicitly disagree, smuggling in a near-complete focus on effectiveness through its hidden assumptions. I think this type of analysis, which many people I have spoken to in the community (incorrectly) take for granted, is not a necessary conclusion, and in fact runs counter to the assumptions behind fiscal effective altruism. Effective careers are great, but making them the only “real” way to be an Effective Altruist should be strongly rejected.
The mistaken analysis goes as follows: if we are balancing priorities and take a consequentialist view, we should prioritize our decisions on the basis of overall impact. However, effective altruism has shown that different interventions differ in impact by orders of magnitude. Therefore, if we give any non-trivial weight to improving the world, the sheer size of the impact differences will overwhelm every other consideration.
This can be illustrated with a notional career choice model. In this model, someone has several different goals. Perhaps they wish to have a family, and think that the impact on their family makes up almost half of the total reason to pick a career, while their personal happiness is another almost-half. Finally, in line with the financial obligation to give 10% of their income, they “tithe” their career choice, assigning 10% of the weight to their positive impact on the broader world. Now they must choose between an “effective career” and a typical office job.
| Priority | Family (45%) Rating | Family (45%) Impact | Personal Happiness (45%) Rating | Personal Happiness (45%) Impact | Beneficence (10%) Rating | Beneficence (10%) Impact | Overall (100%) Rating | Overall (100%) Impact |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Effective Career | 3/10 | 3 | 2/10 | 2 | 9/10 | 1000 | 3.15 | 102.25 |
| Office Job | 9/10 | 9 | 9/10 | 9 | 1/10 | 1 | 8.2 | 8.2 |
As the table illustrates, there are two ways to weigh the options: either by rating how much each option is preferred, or by assessing its impact. The office job effectively has impact only via donations, while the effective career addresses a global need. The first method leads to choosing the office job, the second to choosing the effective career. In this way, the non-trivial weight put on impact becomes overwhelming. (I will note that this analysis often fails to account for another issue that Effective Altruism used to focus on more strongly, that of replaceability. But even assuming a neglected area, where replaceability is negligible, the totalizing conclusion still goes through.)
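For concreteness, here is a minimal sketch of the notional model above in Python (the weights, ratings, and impact figures are the hypothetical ones from the table, not data from any real analysis). It shows how a mere 10% weight on beneficence dominates the overall score once the impact column spans orders of magnitude:

```python
# Notional career-choice model from the table above (illustrative numbers only).
WEIGHTS = {"family": 0.45, "happiness": 0.45, "beneficence": 0.10}

OPTIONS = {
    "effective career": {
        "rating": {"family": 3, "happiness": 2, "beneficence": 9},
        "impact": {"family": 3, "happiness": 2, "beneficence": 1000},
    },
    "office job": {
        "rating": {"family": 9, "happiness": 9, "beneficence": 1},
        "impact": {"family": 9, "happiness": 9, "beneficence": 1},
    },
}

def weighted_score(scores):
    """Weighted sum of the per-factor scores."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

for name, option in OPTIONS.items():
    print(
        f"{name}: rating-based = {weighted_score(option['rating']):.2f}, "
        f"impact-based = {weighted_score(option['impact']):.2f}"
    )

# The rating-based method favours the office job (8.20 vs 3.15), while the
# impact-based method favours the effective career (102.25 vs 8.20), because
# the single 1000-point impact entry swamps the other weighted terms.
```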
The equivalent fiscal argument clearly fails: dedicating 10% of your money to effective causes does not imply that, because a cause is very effective, you are required to give far more than 10%. This is not to say that the impact-weighted analysis is confused, but it does require accepting that, under a sufficiently utilitarian viewpoint in which your decisions weigh your benefit against others', even greatly prioritizing your own happiness creates a nearly totalizing obligation to others. And that is not what Effective Altruism generally suggests.
And to be clear, my claim is not particularly novel. To quote a recent EA Forum post from 80,000 Hours: “It feels important that working to improve the world doesn’t prevent me from achieving any of the other things that are really significant to me in life — for example, having a good relationship with my husband and having close, long-term friendships.”
It seems important, however, to clarify that in many scenarios it simply is not the case that an effective career requires anything like the degree of sacrifice that the example above implies. While charities and altruistic endeavors often pay less than other jobs, the difference is usually a modest fraction of income, not an order of magnitude. And charitable organizations are often as good as or better than commercial employers in terms of collegiality, flexibility, and job satisfaction. Differences in income certainly matter for personal satisfaction, but for many people, effective careers should be seen as a reasonable trade-off, not as either the only morally acceptable choice or an obviously inferior one.
I think that many people who are new to EA, and those who are very excited about it, sometimes make a mistake in how they think about prioritizing, and don’t pay enough attention to their own needs and priorities for their careers. Having a normal job and giving 10% of your income is a great choice for many Effective Altruists. Having a job at an effective organization is a great choice for many other Effective Altruists. People are different, and the fact that some work at EA orgs certainly doesn’t prove they are more committed to the cause, or better people. It just means that different people do different things, and in an inclusive community focused on effectiveness and reasoning, we should be happy with the different ways that different people contribute.
I don’t really disagree with you (ex: 2016, 2022), but have you seen EA writing or in-person discussion advocating choosing an impactful job where you’d rate your happiness 2/10 over a less impactful one where you’d rate it 9/10?
I have seen a few people in EA burn out at jobs they dislike because they feel too much pressure and don’t prioritize themselves at all, and I’ve seen several people trying to find work in AI safety, despite not enjoying it, because it’s the only effective way to work on the issue they were told was most important. Neither of those is as extreme as the notional example, but both seem to stem from this class of reasoning.
(I’ve never discussed this with anyone, but) I think about this like the classic balance between utilitarianism and deontology: “Go three-quarters of the way from deontology to utilitarianism and then stop.”
I mean: yeah, having a high impact is important, but don’t throw out “enjoying life” and other things that might not be easily quantifiable. We’re only human; if we try to forcefully give one consideration 1000x the weight of another, it might well mess up our judgement in some bad way.
[this doesn’t feel so well phrased, hopefully it still makes some sense. If not, I’ll elaborate]
I think this is roughly right.
(That said, it’s a balance, and three-quarters of the way to 100% EA dedicate-ism will sometimes feel and look quite a lot like crazy sacrifice, IMO.)
Yeah, I have no idea if 75% is the correct constant. I mainly read this as “definitely not 100%, and also not 99%.”
[not a philosopher]
Yes; this post came out of drafting a larger post I’m writing that tries to deconstruct EA ideas more generally, including the way that EA really isn’t the same as utilitarianism.
For EAs starting out, there should be some focus on just doing good and not necessarily trying to aggressively optimize for doing good better, especially if you don’t have a lot of credibility in that space.
Also, at the end of the day, EA is just a principle/value system which you can rely on in pretty much any career you end up in. The part about EA being a support system and a place to develop your values is often left out, and as a result a lot of early-stage excited EAs just want to “get into” or “get stuff” out of EA.
I think that “focus on just doing good and not necessarily trying to aggressively optimize for doing good better” is the wrong approach. Doing something just to feel like you did something, without actually trying, is in some ways far worse than admitting you’re not doing good at present and considering whether you want to change that.
And “EA is just a principle/value system which you can rely on in pretty much any career you end up in” sounds like it’s missing the entire point of career choice.