If EA ruled the world, career advisors would tell some people to work for the postal service
EA thinking is thinking on the margin. When EAs prioritise causes, they do so given that they control only their own career, or, sometimes, given that they have some influence over a community of a few thousand people and the distribution of some millions or billions of dollars.
Some critiques of EA act as if statements about cause prioritisation are absolute rather than relative: that EAs are saying literally everyone should be working on AI Safety, or, on the flip side, that EAs are saying no one should be working on [insert a problem which is pressing, but not among the most urgent to commit the next million dollars to].
In conversations that sound like this, I've often turned to the idea that if EAs controlled all the resources in the world, career advisors at the hypothetical world government's version of 80,000 Hours would be advising some people to be… postal workers. Since the EA world government would long ago have filled the current areas of direct EA work, working in the comparatively neglected postal service could be the single most impactful thing a person could do with their skillset.
In this world some people would also be told that the best thing they could do is to work on [insert a problem which is pressing, but not among the most urgent to commit the next million dollars to in our current world].
It's basically just a fun thought experiment to make the point that EAs are not advising on the allocation of the whole world's resources, and if they were, they wouldn't (and shouldn't) argue for neglecting everything except the current top EA causes.
I like the main point you're making.
However, I think "the government's version of 80,000 Hours" is a very command-economy vision. Command economies have a terrible track record, and if there were such a thing as an "EA world government" (which I would have many questions about regardless) I would strongly think it shouldn't try to plan and direct everyone's individual careers, and should instead leverage market forces like ~all successful large economies.
Lol yep that's fair. This is surprisingly never the direction the conversation has gone after I've shared this thought experiment.
Maybe it should be more like: in a world where resources are allocated according to EA priorities (allocation method left unspecified), 80,000 Hours would be likelier to tell someone to be a postal worker than an AI safety researcher… Bit less catchy though.
Yeah, it's totally a contextual call how to make this point in any given conversation; it can be easy to get bogged down in irrelevant context.
I do think it's true that utilitarian thought tends to push one towards centralization and central planning, despite the bad track record here. It's worth engaging with thoughtful critiques of EA vibes on this front.
Salaries are the most basic way our economy does allocation, and one possible "EA government utopia" scenario is one where the government corrects market inefficiencies such that salaries perfectly track "value added to the world." This is deeply sci-fi of course, but hey, why not dream. In such a utopian world, if we really did reach the point where marginal safety researchers are not adding more value than marginal post office workers, salaries would reflect that.
I like this. It makes me think of how the people working in typical EA jobs wouldn't be doing much at all without the people working in food, water, health, transportation, and the other things that make life livable and work workable; and if the number of those support workers suddenly fell to the number of direct workers, EA recruiters would focus on nothing but restoring the number of farmers, bus drivers, and healthy-world doctors.
This frames EA the right way: marginal impact, not universal prescriptions. Career advice depends on what is already saturated and what is neglected. In a world where top EA orgs are fully staffed, the highest-impact move for some people could be stability roles that keep systems running. That does not downgrade prestige; it reflects comparative advantage. The mistake critics make is treating EA advice as moral commands for everyone instead of conditional guidance based on constraints and context.
That's a catchy tagline; I might have to start using it :) Thanks!
From a utilitarian perspective, the priority for altruistic action should be whatever simultaneously reduces immediate human suffering and expands altruistic behavior (proselytizing a lifestyle, since the basis of altruism is altruistic motivation).
I disagree with this perspective, but I am also confused about the downvotes, as the comment seems academic and polite. Shouldn't disagreements be expressed through the X? Or did I miss some criteria on down/upvotes?
I did not downvote it myself, but to me the comment from idea21 seems off-topic to the post itself, which is a (not very strong) negative.