I think this is a good post for starting a conversation, but it has also received a lot of substantial criticism here and on LessWrong. To sum up my favourites:
Sharing an unorthodox belief with someone makes them much more likely to listen to you on other topics.
Weirdness matters a lot if you’re an advocate, but much less otherwise.
Being right is different from being normal. People who care a lot about being normal are rarely the best at judging what’s true, or at contributing to those [discussions](http://lesswrong.com/lw/lbn/you_have_a_set_amount_of_weirdness_points_spend/bocl). And it’s probably more important to figure out what’s true than to persuade people.
Talking about weirdness points is a very negative perspective that could play on people’s fear of social rejection. It’s probably better to talk about familiarity points instead.
Weirdness points just alter how much attention people pay to you, and if you only want a subset of the population to pay attention to you, then you are likely to be prepared to spend more weirdness points; that is, there’s not a fixed amount.
I think each of these is a substantial objection to the thesis of the article and taken together they are pretty damaging.
I think they can be tied together to be even stronger, though.
If you’re trying to spread a radical intellectual idea, then in the beginning you mostly want to target early adopters who have a lot of pulling power. Think of the Russell-Einstein manifesto against nuclear weapons, which preceded the antinuclear movement by a couple of decades, or of many other social movements whose intellectual forebears preceded them. So far, the most valuable contributors to ideas relating to effective altruism are people like Martin Rees, Jaan Tallinn, James Martin, and Elon Musk. To a substantial extent, they appraise ideas based on their content rather than on the basis of who is presenting them. As publisher John Brockman puts it, “I’m not interested in the average person. Most people have never had an idea and wouldn’t recognize one if they saw it.” Or, as Peter Thiel asks: “What is something you believe that nearly no one agrees with you on?” For these people, holding some unorthodox ideas is a prerequisite for them to listen to you. So for people who are promoting a radical idea to others who are themselves radical thinkers, trying to be normal is just not going to cut ice.
Not everyone has to be Will MacAskill or Peter Singer. These two are writing popular books about effective altruism and doing lots of TV interviews about it. They handle talking to the masses so that others can focus on industrial and intellectual leaders, and yet others can focus on researching which charities are the most effective. The problem with this post is that it does not apply to others’ roles in the way it applies to MacAskill’s and Singer’s; rather, the whole spectrum of roles is important. Arguably, MacAskill’s and Singer’s roles are less important, because if we can’t attract the general public, then we can go to high-net-worth individuals instead. However, if we can’t figure out which charities are the most effective, then recruiting is no use at all.
So even if we convince ourselves that it’s more important to do fundamental research and talk to intellectual leaders while avoiding public discourse, it’s very hard to stave off our human need for social approval. The bottom line, though, is that people don’t need a post to tell them to be more aware of social rejection. Most people couldn’t forget about social rejection if they tried. In fact, some of the people who gain the most confidence and passion for making a big change to the world do so because they realise that society’s big problems are not going to fix themselves. Take Nate Soares as one reasonably articulate example. This can come from a realisation that society is pretty selfish or crazy a lot of the time, and so some people are going to have to take on the responsibility of improving it. Instead, trying to get across the idea that society is weird might be a better way to cultivate people with a sense of heroic responsibility for fixing it.
So altogether, although I broadly agree with the post that it’s valuable to be familiar and normal, I think people already behave as though it is. Instead, we should pay more attention to the fact that effective altruism is quite a radical idea, that it already has some people who are focussed on acting as its public face, and that it might be better for people to focus on feeling responsible for improving the world, or on figuring out how to improve it, before becoming so alert to social considerations.
I think all those points are correct, but I view them more as expanding nuances than as direct counterarguments. That is, one can reconstruct a version of my thesis that remains agreeable: it would just be restricted to the domain of advocacy (which I did intend but failed to state) and would admit nuanced ways in which such points can be “earned” in addition to being spent.
-
So far, the most valuable contributors to ideas relating to effective altruism are people like Martin Rees, Jaan Tallinn, James Martin, and Elon Musk.
I think of the use case of this essay as the typical EA person trying to hold conversations with their friends about EA / rationalist / etc. topics. Presumably, this typical EA person does care to be an advocate, at least in part—that is, all else being equal, this person would prefer their friends to adopt their EA / rationalist / etc. ideas.
We typical EA people are restrained by a more constrained budget of points in a way that Rees, Tallinn, etc. aren’t: they’re already famous and have proven themselves with immense, verifiable accomplishments, so (cue the halo effect) they can be openly weird and have people think “hmm, maybe they’re right”.
So I think the case where one wants to advocate comes up pretty often, and the advice here applies to it for non-famous people.
Hey Peter. I think it’s good to point out one main agreement we have: I agree that changing the way you dress and look can be a low-cost and useful way to improve your ability to do advocacy. Where I think you’re missing the mark is with regard to changing your beliefs in order to advocate. If, in the process of making your ideas more palatable to others, you lose track of what’s true, then your advocacy might end up unhelpful or harmful.
I think you’re missing the point of why I mentioned VIPs. It was to argue that being direct and honestly truth-seeking might be a better way of attracting people who have a larger-than-average capacity to influence the world, and that attracting such people is likely to be more useful than trying to influence your friends. I’m not clear which, if either, of these you disagree with. I wasn’t trying to say that we should be weird in order to copy VIPs, because I agree that they’re in a different situation.
For most people, looking good when they introduce their friends to effective altruism is not a neglected problem. Lots of effective altruists, like anyone else, can improve their social skills and tact, but it’s rare for people not to be thinking about social approval at all. Arguably, excessive conformity of opinion is itself one of the world’s big problems: for instance, people can collectively numb themselves to issues like poverty and existential risk.
I could duplicate arguments for wariness of social biases, but instead it seems better to just link Eliezer, who’s been pretty lucid on this topic: Asch’s Conformity Experiment, On Expressing Your Concerns.
Nice summary, Ryan.
Indeed, I think the biggest challenge in terms of spreading EA is what I call “extended responsibility.” Many people have difficulty taking responsibility for their own lives, let alone their family or community. EA asks you to take responsibility for the whole world, and then carry that responsibility with you for your whole life. Holy crap.
After that, the next big ask is for rational methodologies. Even if people take responsibility for their kids, they will probably rely on intuition and availability heuristics.
So discussion around EA advocacy (which is what I believe to be the topic here) could be better focused around “how to move people towards extended responsibility and rational methodologies”.
Of course, that could seem like a soft approach that doesn’t immediately get donations to GiveDirectly. Some of the strategies I outlined in my other comment can be used in instances where you’d like to make a hard sell.
Perhaps I’m not thinking this through, or perhaps I’m simply being unambitious, but I don’t view effective altruism as asking you to take responsibility for the whole world. I certainly don’t feel an enormous weight on my shoulders. I view it more as taking responsibility for the difference between what you would ordinarily do and what you could do if you maximised your impact, which does admittedly require consideration of the whole world.
If valid, maybe that can make effective altruism a little more palatable.