This isn't much more than a rotation (or maybe just a rephrasing), but:
When I offer a ten-second-or-less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them", are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good; aren't we all?"). I feel I need to elaborate with a distinctive example (e.g. "I left clinical practice because I did some amateur health econ on how much good a doctor does, and thought I could make a greater contribution elsewhere") for someone to get a good sense of what I am driving at.
I think a related problem is that the "thin" version of EA can seem slippery when engaging with those who object to it. "If indeed intervention Y were the best thing to do, we would of course support intervention Y" may (hopefully!) be true, but is seldom the heart of the issue. I take it most common objections are not against the principle but the application (I also suspect this reply may inadvertently annoy an objector, given it can paint them as, bizarrely, "preferring less good to more good").
My best try at what makes EA distinctive is a summary of what you spell out with spread, identifiability, etc.: that there are very large returns to reason for beneficence (maybe "deliberation" instead of "reason", or whatever). I think the typical person does "use reason and evidence to do the most good", and can be said to be doing some sort of search for the best actions. I think the core of EA (at least the "E" bit) is the appeal that people should do a lot more of this than they would otherwise, as, if they do, their beneficence would tend to accomplish much more.
Per the OP, motivating this is easier said than done. The best case is global health, as there is a lot more (common-sense) evidence one can point to that some things are a lot better than others, and these object-level matters, which a hypothetical interlocutor is fairly likely to accept, also offer support for the "returns to reason" story. For most other cause areas, the motivating reasons are typically controversial, and the (common-sense) evidence is scant to absent. Perhaps the best moves here would be to point to these as salient considerations which plausibly could dramatically change one's priorities, so that exploring to uncover them is better than exploiting after more limited deliberation (but cf. cluelessness).
On "large returns to reason": My favorite general-purpose example of this is to talk about looking for a good charity, and then realizing how much better the really good charities were than others I had supported. I bring up real examples of where I donated before and after discovering EA, with a few rough numbers to show how much better I think I'm now doing on the metric I care about ("amount that people are helped").
I like this approach because it frames EA as something that can help a person make a common decision ("which charity to support?" or "should I support charity X?"), but without painting them as ignorant or preferring less good (in these conversations, I acknowledge that most people don't think much about decisions like this, and that not thinking much is reasonable given that they don't know how huge the differences in effectiveness can be).
Hi Greg,
I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in the thin version later.
(Maybe I should have clarified that my aim wasn't to provide a new popular introduction, but rather to better clarify what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.)
I also agree that many objections are about EA in practice rather than the "thin" core ideas, that it can be annoying to retreat back to thin EA, and that it's often better to start by responding to the objections to the thick version. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with "I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because..."), so I think it's worth making some efforts in that direction.
Thanks for the other thoughts!