How avoiding drastic career changes could support EA’s epistemic health and long-term efficacy.

This post was written for Lizka, Fin and Joshua's EA Criticism and Red Teaming Contest. It represents my views on a tension I perceive between EA's desire to muster professional ranks in high-impact cause areas and its strong rootedness in shared epistemic processes and personal epistemic virtues. To the best of my knowledge, all views (and certainly all mistakes) here are my own.

Within Effective Altruism, a great deal of time, effort and money is spent trying to convince young people to change their careers so they can reduce suffering in the world more effectively. While this is a noble goal, I think most newcomers to EA should not immediately drop their pre-existing career aspirations to pursue EA-aligned careers, even if they feel that those cause areas are important. I worry that the way EA career consulting organizations shuttle newcomers towards work in currently high-impact cause areas (pandemic safety, AI Safety, animal suffering, global poverty) allows newcomers to avoid learning how to personally embody the epistemic virtues that make EA a basically rational and trustworthy movement. These virtues include its willingness to self-criticize (typified by this Red Teaming contest), its assent to shared rational frameworks (such as the importance-tractability-neglectedness analysis), the high value it places on epistemic self-trust[1] and on the independent verification of information (e.g., the Giving Game), its willingness to be candid about its uncertainties (such as the likelihood of existential catastrophes over particular time horizons), and its stated willingness to change core positions and strategies in response to new information.

While it is probably effective in the near term for EA orgs to guide people into highly effective careers without requiring them to take part in the deeper reasoning behind those choices, it is also probably harmful to the long-term epistemic health of EA for many people to join its organizations without firm roots in these virtues, particularly epistemic self-trust. To protect EA's epistemically healthy community and keep its work effective as global circumstances change, I think its leadership should help novice EAs develop epistemic self-trust and become deeply familiar with the importance-tractability-neglectedness framework before guiding them to change their fields of work.

Let me unpack the example that motivates this post. At EAGxBoston 2022, I met the founder of an AI Safety career consulting service who told me that most of his clients were young EAs who just wanted to be told what to do to get a job in AI Safety. He said they didn't feel capable of figuring out which parts of AI Safety needed help, and he was surprised at how many young EAs wanted to work in the field while simultaneously feeling totally lost when trying to understand it. I find it suspect that so many early-career EAs could feel enormous enthusiasm for AI Safety work and yet, at the same time, feel total confusion about the nature of that work. It makes me think that most of these EAs primarily wanted to change their careers because AI Safety is an attractive and highly interesting area, not because they had done an importance-tractability-neglectedness analysis and determined that it was the most important one. If most of them had actually done this analysis, exploring the tractability of the problem would have clarified whether and where AI Safety needed their additional help. But since it seemed like most of this person's clients did not feel able to trust their own judgments of the field and instead wanted point-blank instructions for starting their careers there, I think most of their enthusiasm for AI Safety probably came from (something like) speculating about dystopian AI futures rather than from soberly analyzing the field's specific needs and its importance relative to other cause areas.

If we don't train ourselves to stay very close to the ground truth and hard data about suffering and its causes, our lack of epistemic self-trust will undercut our capacity to reason about and respond effectively to world problems in the long term. As world circumstances and global priorities change, EAs who lack epistemic self-trust won't feel confident enough to assess the impact of their projects, and will instead defer responsibility for "checking" their own effectiveness to peers and superiors in whatever movement, organization or community they're part of. They will probably also get attached to (something like) AI Safety as a fixed "this is the most important cause area" conclusion, rather than remaining attuned to ongoing changes in world circumstances and agile enough to respond by reorienting their main professional focus, if that's what circumstances really required. If we don't make sure EAs can, at some level, constantly check the assumptions and conclusions of their peers and superiors with a measure of self-confidence, EA will become liable (if it is not already) to waste resources on popular projects that are approved and conducted by increasingly small groups of EA leaders, or approved by epistemic processes that not everyone can, or feels empowered to, examine and ultimately (one would hope) affirm as rigorous and trustworthy. To avoid losing the epistemic health of our movement and to keep EA effective as global circumstances change, we should focus on teaching newcomers to ask critical questions, to seek answers that make sense to them, and to thoroughly analyze problems in terms of their importance, tractability and neglectedness before coaching them on how to begin working in highly unfamiliar fields.

Similarly, I think most newcomers to EA shouldn't immediately want to change their careers to fit its current cause areas, especially if they've been preparing to contribute something completely different to society. I think it's worth more to EA, and to the world, for people to be value-aligned and capable of setting their own priorities in diverse areas than for everyone to work within a fixed set of cause areas without fully understanding the value or impact of their efforts. Luckily, EA's importance-tractability-neglectedness analysis can be applied within most limited domains to powerful effect, which means that EAs who stick to their original career paths could apply the analysis to make their existing projects more effective and efficient while still leveraging their expertise and experience in their original fields. In the long run, these people could gradually shift into career areas that are possibly more effective in a global, existential sense (such as AI Safety), but by that point they would have formed foundations in EA principles strong enough to hold deeply critical and epistemically self-trusting stances in those fields, rather than feeling conceptually unmoored when thinking about them. Stronger epistemic stances would let them navigate the considerably more abstract and uncertain problems that characterize AI Safety and other conceptually complex EA cause areas without following others blindly or neglecting to do their own due diligence before making important (and perhaps even existentially consequential) decisions within them.
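To make "applying the analysis within a limited domain" concrete, here is a minimal, purely illustrative sketch in Python of the kind of back-of-the-envelope comparison I have in mind. The project names and numbers are invented, and the multiplicative scoring is just one common informal reading of the importance-tractability-neglectedness framework, not an official EA tool:

```python
# A toy, back-of-the-envelope ITN comparison within a single field of work.
# All project names and numbers are made up for illustration only.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    importance: float     # rough value of solving the whole problem (arbitrary units)
    tractability: float   # rough fraction of the problem extra effort could solve
    neglectedness: float  # rough scaling for how little effort is already going in

    def marginal_value(self) -> float:
        # One common informal reading of ITN: the expected good done by an
        # extra unit of effort is roughly the product of the three factors.
        return self.importance * self.tractability * self.neglectedness

projects = [
    Project("fix the data-quality problem my team keeps working around", 100, 0.30, 0.50),
    Project("automate a process everyone already has scripts for", 400, 0.10, 0.05),
]

# Rank the candidate projects by their estimated marginal value.
for p in sorted(projects, key=Project.marginal_value, reverse=True):
    print(f"{p.name}: ~{p.marginal_value():.0f} units of good per extra unit of effort")
```

The numbers themselves matter far less than the habit they represent: making one's own estimates explicit, so that peers (and one's future self) can check and challenge them.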

One counterpoint to the specific example of AI Safety I've been discussing would argue that the field's material is so complex that only field experts could ever hope to determine which of its projects are worthwhile, and that anyone else who tried to formulate or lead projects by themselves would seriously risk stalling or even harming the field's progress. If this were true, then we might reinterpret the EAs who ask career consultants where they should start working within AI Safety not as mindlessly deferring to authority, but as intentionally putting their own (hopefully robust, independent and deeply rooted) knowledge and decision-making capacities on temporary standby after recognizing that, in this specific case, "forgetting" what they know and doing exactly what experts instruct would be the most effective (and least risky) way to advance AI Safety's cause.

But even if this were true, I think the sheer number of EAs currently interested in AI Safety implies that this approach could threaten EA's long-term epistemic health. As I learned from a friend at EAG San Francisco this year, about 50% of the conference's attendees and presenters had a central focus on AI Safety. Fifty percent! If such a large proportion of EAs feel they must deliberately disengage from their own knowledge and carry out the instructions of others for AI Safety to proceed effectively, then the epistemic virtues of our movement may be undergoing a process of dilution, as up to half of its members neglect to practice them while engaging with (and thereby instantiating) EA and its organizations. The dilution of EA's epistemic virtues through neglect would threaten its future epistemic health, since fewer and fewer people would remain in its organizations to steward an epistemically rigorous and deeply EA-aligned ethos into the future.

Also, since many of EA's current cause areas, like AI Safety, offer existentially charged and exciting opportunities to speculate about unprecedented directions for the future of the world, the natural fervor surrounding them may shroud the epistemic failings of people who seem to be working on them with great passion and dedication. For example, it would be easy for someone who enjoys millenarian speculation, but who has developed neither epistemic self-trust nor a particularly deep acquaintance with EA's importance-tractability-neglectedness framework, to passionately and concernedly blend in with the leading voices in AI Safety today. It is a problem if EA is amassing lots of AI Safety enthusiasts whose passion derives more from the spectacle of dystopian AI futures than from sober analysis of that possibility's likelihood, because forthcoming AI Safety projects would then have fewer and fewer contributors capable of independently verifying their basis in reasons and evidence. At the same time, uninformed contributors to these projects could detract from AI Safety efforts as a whole, because expressing concerns about unaligned AI to outside audiences without strong, well-practiced evidentiary grounds could serve to delegitimize those (very possibly real, valid and globally important) concerns.

In my view, EAs' simultaneous excitement about and unfamiliarity with AI Safety suggests that many EAs believe (perhaps implicitly) that they must choose between developing epistemic self-trust and deep familiarity with EA's epistemic norms on the one hand, and beginning to do what EA orgs and leaders have determined to be most effective on the other. I think this is a false choice: it isn't particularly effective for people to avoid working on pressing global issues, but neither is it valuable for them to work on those issues without fully embodying the epistemic principles and norms that produced their high-priority status in the first place. Instead, I think AI Safety should retain a smaller and more epistemically self-confident group of EAs for its projects, while others who recognize the significance of AI Safety work but lack that confidence focus first on developing familiarity with EA's epistemic norms and their own sense of epistemic self-trust. Doing this work (learning to question the claims of field experts as one gradually develops expertise oneself, asking for evidence and patiently working through it, knowing when a domain of inquiry is, for the moment, beyond one's own conceptual grasp, and knowing which domains lie, also for the moment, squarely within one's own epistemic capability) would lay the foundation for them to approach with open eyes and open minds the challenges humanity will continue to face as circumstances evolve, whether those challenges relate to AI Safety or to something entirely different when the time comes.

Stemming from this critique, I think one positive change for EA would be to shift the focus of its conferences away from career-focused activities and towards things like collaborative sessions for thinking through world problems in deep, thorough and holistic ways, workshops for practicing and developing epistemic skills and habits, and opportunities to learn how to use different research methods to set effective priorities and personally verify the impact of projects and charitable interventions. Since the audience for EA conferences seems to be made up heavily of early-career professionals and college students who are interested in learning more about EA and its cause areas, I think it is important for EA's leaders to consider the possibility that sending young EAs off to work on high-impact projects before they have sufficiently developed their capacity for epistemic self-trust and familiarity with EA's core analyses may limit EA's future efficacy and hurt its epistemic health. I don't think we should stop sharing career-focused ideas at EA events, but I do think using those events to focus more on inculcating EA's epistemic virtues would prepare their young attendees for future successes, reinforce the future efficacy of EA as a multi-generational movement, and fundamentally affirm that EA is a conceptual approach to doing good that anyone can freshly apply at any point in our future, not a fixed set of professional options delimited to resolving only the problems of our present-day world.


[1] The way I’m using it, epistemic self-trust means having confidence in your own capacity to perceive and reason about states of affairs in order to understand what is true about them. I’ve heard it said before that people should aspire to be “epistemic vacuum cleaners” who verify the truth of received information and weigh its worth using their own judgment before representing it as true to others.
