I often run into the problem of EA coming up in casual conversation and not knowing exactly how to explain what it is, and I know many others run into this problem too.
This approach is not rigorously tested or peer-reviewed, but I’ve found it works decently. The intended audience is a “normal person”.
My short casual pitch of EA:
“Effective altruism is about doing research to improve the effectiveness of philanthropy. Researchers can measure the effects of different interventions, like providing books versus providing malaria nets. GiveWell, an effective altruist charity evaluator, has identified a few high-impact interventions: malaria medicine and nets, vitamin A supplements, encouraging childhood vaccinations, and so on.”
If I have a couple more sentences to introduce a bit of longtermism:
“There is also a part of effective altruism which is concerned with preventing future catastrophes. Climate change is one well-known example. Another example is global catastrophic biological risks—as we saw with COVID-19, pandemics can cause a lot of harm, so effective altruists see research in biosecurity and pandemic prevention as highly effective. There is also the field of “AI Safety”, which is based on the premise that AI systems will become more prevalent in the future, so it is important we thoroughly research their capabilities before deploying them. The unifying theme here is a “longtermist” worldview—the idea that we can do good things now which will have positive effects on the far future.”
The ideas that make up this pitch are:
Start with broadly accepted premises (“AI systems will become more prevalent in the future”) before putting the EA spin on them (“so we need to do AI safety research”). This principle also applies to writing abstracts.
Sacrifice precision in definitions of concepts for the sake of getting the intuitive idea across. For example, describing longtermism as “doing things which positively affect the future” does not perfectly capture the concept, but it’s an easier starting point than “future-people are just as morally relevant as present-people”.
These principles can similarly be applied to brief descriptions of AI safety, animal welfare, and other cause areas.
Does the short casual pitch not run the risk of limiting EA’s scope too much to philanthropy? To me, it seems to miss the core of EA: figuring out how to better improve the world, given the resources we have.