EA often comes up in casual conversation, and I often don’t know exactly how to explain what it is; I know many others run into this problem as well.
This approach isn’t rigorously tested or peer-reviewed, but I’ve found it works decently. The intended audience is a “normal person”.
My short casual pitch of EA:
“Effective altruism is about doing research to improve the effectiveness of philanthropy. Researchers can measure the effects of different interventions, like providing books versus providing malaria nets. GiveWell, an effective altruist charity evaluator, has identified a few high-impact interventions: malaria medicine and nets, vitamin A supplements, encouraging childhood vaccinations, and so on.”
If I have a couple more sentences to introduce a bit of longtermism:
“There is also a part of effective altruism which is concerned with preventing future catastrophes. Climate change is one well-known example. Another example is global catastrophic biological risks—as we saw with COVID-19, pandemics can cause a lot of harm, so effective altruists see research in biosecurity and pandemic prevention as highly effective. There is also the field of “AI Safety”, which is based on the premise that AI systems will become more prevalent in the future, so it is important we thoroughly research their capabilities before deploying them. The unifying theme here is a “longtermist” worldview—the idea that we can do good things now which will have positive effects on the far future.”
The ideas that make up this pitch are:
1. Start with broadly accepted premises (“AI systems will become more prevalent in the future”) before putting the EA spin on it (“so we need to do AI safety research”). This principle also applies to writing abstracts.
2. Sacrifice precision in definitions of concepts for the sake of getting the intuitive idea across. For example, describing longtermism as “doing things which positively affect the future” does not perfectly capture the concept, but it’s an easier starting point than “future people are just as morally relevant as present people”.
The same principles can be applied to giving simple descriptions of AI safety, animal welfare, and other cause areas.
When I say “repeating talking points”, I am thinking of:
1. Using cached phrases and not explaining where they come from.
2. Conversations which go like this:
EA: We need to think about expanding our moral circle, because animals may be morally relevant.
Non-EA: I don’t think animals are morally relevant though.
EA: OK, but if animals are morally relevant, then quadrillions of lives are at stake.
(2) is kind of a caricature as written, but I have witnessed conversations like these in EA spaces.
My evidence for this claim comes from my personal experience watching EAs talk to non-EAs, and listening to non-EAs talk about their perception of EA. The total number of data points in this pool is ~20. I don’t have exceptionally many EA contacts compared to most EAs, but I do make a particular effort to seek out social spaces where non-EAs are looking to learn about EA. Thinking back on these experiences, and on which conversations went well and which didn’t, is what inspired me to write this short post.
Ultimately, my anecdotal data can’t support any statistical claims about the EA community at large. The purpose of this post is more to describe two mental models of EA alignment and to advocate for the “skill mastery” perspective.