It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc., completely unconnected to altruism, than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc., than about their preferred altruistic causes. And it really would be a big win if they succeeded—less ambiguously so than with narrow EA, I think (see Carl’s comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
I agree with the “lack of obvious low-hanging fruit”. It doesn’t actually seem obvious to me how useful these concepts are to people in general, as opposed to more specific concrete advice (such as specific exercises for improving their social skills etc.). In particular, Less Wrong has been devoted to roughly this kind of thing, and even among LW regulars who may have spent hundreds of hours participating on the site, it’s always been controversial whether the concepts they’ve learned from the site have translated into any major life gains. My current inclination would be that “general thinking skills” just aren’t very useful for dealing with your practical life, and that concrete domain-specific ideas are much more useful.
You said that people in general care much more about concrete things in their own lives than their preferred altruistic causes, and I agree with this. But on the other hand, the kinds of people who are already committed to working on some altruistic cause are probably a different case: if you’re already devoted to some specific goal, then you might have more of an interest in applying these concepts. If you first targeted people working in existing organizations and won them over to using these ideas, then they might start teaching the ideas to all of their future hires, and over time the concepts could start to spread to the general population more.
Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I’m unsure how difficult this would be.
Maybe. One problem here is that some of these correlate only very loosely with EA: a lot of people have completed math education who aren’t EAs. And I think that another problem is that in order to really internalize an idea, you need to actively use it. My thinking here is similar to Venkatesh Rao’s, who wrote:
Strong views represent a kind of high sunk cost. When you have invested a lot of effort forming habits, and beliefs justifying those habits, shifting a view involves more than just accepting a new set of beliefs. You have to:
Learn new habits based on the new view
Learn new patterns of thinking within the new view
The order is very important. I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.
This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that they work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).
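The "slippery slope" Rao describes—from A/B testing to genetic algorithms to evolutionary processes in general—rests on one shared mechanism: variation plus selection. A toy sketch (mine, not from the quote; the target string and parameters are arbitrary illustrations) of how the same loop that powers an A/B test, "try a variant, keep it if it measures better", already is a minimal genetic algorithm:

```python
import random

TARGET = "effective altruism"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # "Measurement" step: count positions matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # "Variation" step: randomly replace one character.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def evolve(generations=5000, seed=0):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        candidate = mutate(current)
        # "Selection" step: keep the variant only if it does at least as well,
        # exactly like promoting the winning arm of an A/B test.
        if fitness(candidate) >= fitness(current):
            current = candidate
        if current == TARGET:
            break
    return current

print(evolve())
```

Nothing in the loop "knows" where it is going; the target emerges purely from repeated variation and differential retention—which is the appreciation of evolutionary processes Rao is gesturing at.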
I wouldn’t know how to spread something like cosmopolitanism, to a large extent because I don’t know how to teach the kind of thinking habits that would cause you to internalize cosmopolitanism. And even after that, there would still be the step of getting from all of those prerequisites to applying EA principles in practice. In contrast, teaching EA concepts by getting people to apply them to a charitable field they already care about gets them applying EA-ish thinking habits directly.
Both of these alternatives seem to have what is (to me) an advantage: they don’t involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people could build high-trust collaborations relatively quickly.
That’s an interesting view, which I hadn’t considered. I might view it more as a disadvantage, in that in the model that I was thinking of, people who got into low-level EA would almost automatically also be exposed to high-level EA, causing the idea of high-level EA to spread further. If you were only teaching related concepts, that jump from them to high-level EA wouldn’t happen automatically, but would require some additional steps. (That said, if you could teach enough of those prerequisites, maybe the jump would be relatively automatic. But this seems challenging for the reasons I’ve mentioned above.)
Thanks for the comment!