I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I’m not necessarily endorsing them, and don’t intend them as a direct response, but thought they might be interesting. It’s all very rough and quickly written.
1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they’re already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.
It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It’s more transparent and open to pitch high-level EA directly.
There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.
2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It’s what makes us different from simply “competent” do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in the scale of influence to offset this.
3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building the scientific culture in the UK. It was an elite group of scientists who simply went about the business of doing science. Early members included Newton and Boyle. The society brought like-minded people together, and helped them to be more successful, ultimately spreading the scientific mindset.
Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to Reddit → Dropbox → Airbnb, so much of their general impact was due to having a couple of concrete successes.
Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on is simply having a huge impact: finding the “Airbnb of EA” or the “Newton of EA”.
1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they’re already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.
Yeah, agreed. Though part of what I was trying to say is that, as you mentioned, we have the high/low distinction already—“implementing” that distinction would just be giving an explicit name to something that already exists. And something that has a name is easier to refer to and talk about, so having some set of terms for the two types could make it easier to be more transparent about the existence of the distinction when doing outreach. (This would be the case regardless of whether we want to expand EA to lower-impact causes or not.)
2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It’s what makes us different from simply “competent” do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in the scale of influence to offset this.
I guess the question here is: how much would efforts to bring in low-level EAs hurt efforts to bring in high-level EAs? My intuition is that the net effect would be to bring in more high-level EAs overall (a smaller percentage of incoming people would become high-level EAs, but that would be offset by there being more incoming people overall), but I don’t have any firm support for that intuition, and one would have to test it somehow.
3) Often the best way to promote general ideas is to live them. … Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on is simply having a huge impact: finding the “Airbnb of EA” or the “Newton of EA”.
I agree that the best way to promote general ideas can be to live them. But I think we need to be more specific about what a “huge impact” would mean in this context. For example, High Impact Science suggests that Norman Borlaug is one of the people who have had the biggest positive impact on the world, yet most people have probably never heard of him. So for spreading social norms, it’s not enough to live the ideas and make a big impact; one has to do it in a sufficiently visible way.