Thanks for writing this up! I agree that it could be a big win if general EA ideas besides cause prioritization (or the idea of scope-limited cause prioritization) spread to the point of being as widely accepted as environmentalism. Some alternatives to this proposal though:
It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc., completely unconnected to altruism, than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc. than about their preferred altruistic causes. And it really would be a big win if these efforts succeeded; less ambiguously so, I think, than with narrow EA (see Carl’s comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I’m unsure how difficult this would be.
Both of these alternatives seem to have what is (to me) an advantage: they don’t involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people could establish high-trust collaboration relatively quickly.
FWIW, I think I would be much more excited to evangelize broad low-level EA memes if there were some strong alternative channel to distinguish cause-neutral, super intense/obsessive EAs. Science has a very explicit distinction between science fans and scientists, and a very explicit funnel from one to the other (several years of formal education). EA doesn’t have that yet, and may never. My instinct is that we should work on building a really, really great “product”, then build high and publicly recognized walls between “practitioners” and “consumers” (a practical division of labor rather than a moral-high-ground thing), and then market the product hard to consumers.
It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc., completely unconnected to altruism, than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc. than about their preferred altruistic causes. And it really would be a big win if these efforts succeeded; less ambiguously so, I think, than with narrow EA (see Carl’s comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
Thanks for the comment! I agree with the “lack of obvious low-hanging fruit”. It doesn’t actually seem obvious to me how useful these concepts are to people in general, as opposed to more specific, concrete advice (such as particular exercises for improving one’s social skills). In particular, Less Wrong has been devoted to roughly this kind of thing, and even among LW regulars who may have spent hundreds of hours participating on the site, it has always been controversial whether the concepts they’ve learned there have translated into any major life gains. My current inclination is that “general thinking skills” just aren’t very useful for dealing with your practical life, and that concrete, domain-specific ideas are much more useful.
You said that people in general care much more about concrete things in their own lives than about their preferred altruistic causes, and I agree with this. But the kinds of people who are already committed to working on some altruistic cause are probably a different case: if you’re already devoted to a specific goal, you might have more of an interest in applying these concepts. If you first targeted people working in existing organizations and won them over to using these ideas, they might then start teaching the ideas to all of their future hires, and over time the concepts could start to spread to the general population more widely.
Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I’m unsure how difficult this would be.
Maybe. One problem here is that some of these correlate only very loosely with EA: a lot of people who aren’t EAs have completed a math education. Another problem, I think, is that in order to really internalize an idea, you need to actively use it. My thinking here is similar to Venkatesh Rao’s, who wrote:
Strong views represent a kind of high sunk cost. When you have invested a lot of effort forming habits, and beliefs justifying those habits, shifting a view involves more than just accepting a new set of beliefs. You have to:
1. Learn new habits based on the new view
2. Learn new patterns of thinking within the new view
The order is very important. I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.
This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that they work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).
I wouldn’t know how to spread something like cosmopolitanism, to a large extent because I don’t know how to teach the kinds of thinking habits that would cause you to internalize cosmopolitanism. And even after that, there would still be the step of getting from all of those prerequisites to applying EA principles in practice. In contrast, teaching EA concepts by getting people to apply them to a charitable field they already care about gets them applying EA-ish thinking habits directly.
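(As a concrete illustration of the A/B testing example in Rao’s quote: below is a minimal Python sketch of how the measure-and-select step of an A/B test is also the core step of a genetic algorithm. The variant names and click-through rates are made up for illustration.)

```python
import random

# Hypothetical click-through rates for two variants. In a real A/B test
# these are unknown; the test exists to estimate them from data.
TRUE_RATES = {"A": 0.10, "B": 0.13}

def run_ab_test(n_per_variant=5000, seed=0):
    """Show each variant to n_per_variant simulated users and count clicks."""
    rng = random.Random(seed)
    estimates = {}
    for variant, rate in TRUE_RATES.items():
        clicks = sum(rng.random() < rate for _ in range(n_per_variant))
        estimates[variant] = clicks / n_per_variant
    # Selection: keep whichever variant performed best empirically.
    winner = max(estimates, key=estimates.get)
    return estimates, winner

estimates, winner = run_ab_test()
print(estimates, "->", winner)

# A genetic algorithm is this same measure-and-select loop iterated:
# mutate the winner into new variants, test again, select again.
```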
Both of these alternatives seem to have what is (to me) an advantage: they don’t involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people could establish high-trust collaboration relatively quickly.
That’s an interesting view, which I hadn’t considered. I might view it more as a disadvantage: in the model I was thinking of, people who got into low-level EA would almost automatically also be exposed to high-level EA, causing the idea of high-level EA to spread further. If you were only teaching related concepts, the jump from them to high-level EA wouldn’t happen automatically, but would require some additional steps. (That said, if you could teach enough of those prerequisites, maybe the jump would be relatively automatic. But this seems challenging for the reasons I’ve mentioned above.)
I want to suggest a more general version of Ajeya’s view, which is:
If someone did want to put time and effort into creating the resources to promote something akin to “broad effective altruism”, they could focus their effort in one of two ways:
1. on research and advocacy that does not add to (and possibly detracts attention from) the “narrow effective altruism” movement.
2. on research and advocacy that benefits the effective altruism movement.
EXAMPLES
E.g. researching what the best arts charity in the UK is.
Not useful, as it is very unlikely that anyone who takes a cause-neutral approach to charity would want to give to a UK arts charity.
There is also a risk of misleading people, for example if someone googles “effective altruism” and a bunch of materials on UK arts comes up first.
E.g. researching general principles of how to evaluate charities, researching climate change solutions, or researching systemic-change charities.
These would all expand the scope of EA research and writing, might produce plausible candidates for the best charity/cause, and would at the same time act to attract more people into the movement.
Consider climate change: it is a problem that humanity has to solve at some point this century (unlike UK arts), and it is also a cause many non-EAs care about strongly.
CONCLUSION
So if at least some effort were put into any “broad effective altruism” expansion, I would strongly recommend starting by finding ways to expand the movement that are simultaneously useful areas for us to be considering in more detail.
(That said, FWIW, I am very wary of attempts to expand into a “broad effective altruism”, for some of the reasons mentioned by others.)
Views my own, not my employer’s.