Hmm. On the one hand I think these are all useful topics for an EA to know. But I don’t think it’s necessary for all EAs to know these things. I think there’s a lot of EAs who don’t have this technical knowledge, but are happy to outsource decisions relying on this knowledge (such as where to donate) to people who do. That said, I think that often leads to donating less-than-effectively (e.g. giving to whatever EA Fund appeals to you personally, rather than rationally thinking about trade-offs/probabilistic outcomes).
I guess this is, in part, a big-tent vs. elite EA trade-off question. If EA is best as an elite movement, it makes sense that all the members should have this knowledge. But if we want to take an “everyone has a place in EA” approach, then it might not make sense to have a central curriculum.
Also, I don’t think we want everyone in EA to have the same skillset. EA isn’t, in my view, a single professional field, but perhaps more like a company (although this is probably an oversimplification). If a company gave all of their employees a handbook on How to Be A Great Project Manager, it’d be helpful… for project managers. But the rest of the team ought to be rounding out skills that others in the company don’t have that suit their comparative advantage and will move the company forward. The only thing everyone at the company really needs to know is the product. Basic time management / other soft skills are also useful. I don’t think we need 100% of EAs to have a solid grounding in economics. Maybe we need ~100% of EAs to trust economics. But I’d rather have some EAs focusing on building skills like movement-building, communications, fundraising, operations/management, entrepreneurship, policy, qualitative research, etc.
Granted, I’m thinking about this from the perspective of careers, rather than being able to participate in discussions in EA spaces. To answer that aspect of it: although I certainly think it’s possible to discuss EA without knowing about economics, statistics, or decision analysis, the conversation does sometimes go in this more technical direction and leave newcomers behind. The question, then, might be whether it’s the newcomers who should hold the responsibility of learning this so that they can participate in these discussions, or whether the people discussing things at such a technical level should adjust the way they discuss these issues to make them more accessible to a non-technical audience. I lean more towards the latter (though it depends on the context).
I agree with Marisa
Rather than a single body of knowledge being a standard education for EAs, I like the fellowship structure that many EA Uni groups use.
For me, one of the main goals in running these fellowships is to expose students to enough EA ideas and discussions to decide for themselves what knowledge and skills they want to build up in order to do good better. For some people, this will involve economics, statistics, and decision analysis knowledge, but for others, it will look totally different.
(For fellowship syllabus examples you can check out this Intro Fellowship I’m running at Brown EA, and this In-Depth Fellowship run by EA Oxford).
I do think some ethics is a must, not necessarily to be prescriptive, but to challenge people’s views and introduce alternatives so they don’t get stuck with something they would not endorse if they knew more. Some topics I’d recommend:
Population axiology: total utilitarianism, the repugnant conclusion (and other similar results), person-affecting views, negative utilitarianism, average utilitarianism, prioritarianism, egalitarianism, lives vs headaches, dust specks vs torture. Hilary Greaves wrote a survey on population axiology.
Welfare/wellbeing: hedonistic, desire-based, objective list theory. Symmetric vs asymmetric/antifrustrationist/negative/suffering-focused views. See the SEP article.
Impartiality, just and unjust discrimination, equal consideration of interests, speciesism. Moral subjects/patients vs moral agents, sentience/consciousness.
Free will, personal identity, death, moral luck, moral responsibility.
Objections and alternatives to consequentialism.
Metaethics, realism vs anti-realism, moral uncertainty.
The Stanford Encyclopedia of Philosophy articles are usually good, and sometimes the Wikipedia articles are, too.
This stuff is interesting to think about. There have been EA courses before. There could one day be a textbook for effective altruism. There could be a successor to the RSP that offers a degree. Similar stories hold for “global prioritisation”, “macrostrategy”, and “AI safety”.
Julia Wise provided a list of EA syllabi and teaching materials here.
(RSP = Future of Humanity Institute’s Research Scholars Programme.)
There are also the reading lists recently put together by Richard Ngo.
Great question! I hope to find time to engage substantively later, but for now I just wanted to flag that I’m considering spending significant time from September or October putting together some kind of “EA curriculum”, and that I’d be happy to talk to anyone interested in similar ideas. Send me a PM if you want to jump on a call in the next couple of weeks.
Hi Max,
I’m curious how big you are thinking this “EA curriculum” might be. Are you thinking of something similar to an EA Uni group fellowship (usually ~4 hours/week for ~8 weeks), or are you thinking of something much larger?
I was mostly thinking of a curriculum that would eventually be much larger (though could be modular, and certainly would have a smaller MVP as first step to gauge viability of the larger curriculum).
But my views on this aren’t firm, and in general one of the first things I’ll do is to determine various fundamental properties I don’t feel certain about yet. Other than length, these include target audience and intended outcomes (e.g. attracting new people to EA, “onboarding” new EAs, bringing moderately experienced EAs to the same level, or allowing even quite involved EAs to learn something new by increasing the amount of content that is publicly accessible as opposed to sitting in some people’s minds or nonpublic docs), scope (e.g. only longtermism?), and focus on content/knowledge vs. skills/methods.
Sounds very exciting!
And it seems like there is some overlap with EA Uni group fellowships, so I’d be happy to talk to you about those if you want, although it may be better to talk to the community builders more involved in syllabus writing. (See this Intro Fellowship I’m running at Brown EA.)
I remember commenting to an economist friend a few months ago that economists generally have much better ethics than philosophers, precisely because they consistently apply utilitarianism and then move on to the interesting questions, as opposed to philosophers wanting to debate ethics to death. So I concur with the decision to include economics over philosophy.