As EA grew from humble, small, and highly specific beginnings (such as, but not limited to, high-impact philanthropy), it became increasingly big tent.
In becoming big tent, it has become tolerant of ideas or notions that previously would have been heavily censured or criticized in EA meetings.
This is in large part because early EA was more data-driven, with less of a focus on hypotheticals, speculation, and non-quantifiable metrics. That’s not to say current EA isn’t these things; they’re just relatively less stressed than they were 5-10 years ago.
In practice, this means today’s EA is more willing to consider altruistic impact that can’t be easily or accurately measured or quantified, especially with (some) longtermist interests. I find this to be a rather damning weakness, although one could make the case that it is also a strength.
This also extends to outreach.
For example, I wouldn’t be surprised if an EA gave a dollar to, or volunteered for, a seeing-eye-dog organization [or any other ineffective organization] under the justification that this is “community-building” and that, like the Borg, we will someday be able to assimilate them, make them more effective, or recruit new people into EA.
To me and other old-guard EAs, this is wishful thinking, because it dilutes EA’s epistemic roots, especially over time as non-EA ideas enter the fold and influence the group. One example is how DEI initiatives are wholeheartedly welcomed by EA organizations, when in fact there is little evidence that the DEI/progressive approach to hiring produces better performance outcomes than ordinary hiring that gives no advantage to a candidate based on their ethnicity, gender, race, or sexual orientation.
But even more so with cause prioritization. In the past, it was very difficult to get your championed or preferred cause considered even remotely effective. In fact, the null hypothesis was that your cause wasn’t effective… and that most causes weren’t.
Now it’s more like any and all causes are assumed effective, or potentially effective, from the get-go, and are then supported by some marginal amount of evidence. A less elitist and stringent approach, but an inevitable one once you become big tent. Some people feel this has made EA a friendlier place. Let’s just say that today you’d be less likely to be kicked out of an EA meeting for being naively optimistic without a care for figures, and more likely to be kicked out for being overtly critical (or even mean), even if that criticalness was exactly the strict attitude of early EA meetings that turned a lot of people off from EA. It turned me off too when I first heard about it, though I later came to appreciate and welcome that sort of attitude and its relative rarity in the world. Strict and robust epistemics are underappreciated.
For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past, the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?
Unlike Disney, with its iron grip over its brands and properties, it’s much easier nowadays to call oneself an EA or part of the EA sphere… because, well, anything and everything can be EA. The EA brand, once tightly controlled and small, has now grown, and it can be difficult to tell the fake Gucci bags from the real deal when both are sold at the same market.
My greatest fear is that EA will, over time, become just A without the E, and lose its initial ruthlessly data- and results-driven form of moral concern.
I don’t think core EA is more “big tent” now than it used to be. Relatively more intellectual effort is devoted to longtermism now than to global health and development, but that represents a shift in focus rather than a widening of it.
What you might be seeing is an influx of money across the board, which at least partially lowers the funding bar for more speculative interventions.
Also, many people now believe that the ROI of movement building is incredibly high, which I think was less true even a few years ago. So net-positive but not very exciting movement-building interventions—both things that look more like traditional “community building” and things that look like “support specific promising young EAs”—are much more likely to be funded than before. In the “support specific promising young EAs” case, this might be true even if they say dumb things or are currently pursuing lower-impact cause areas, as long as the community-building case for it is sufficiently strong (above some multiplier for funding, and reasonably likely to be net positive).
I think I no longer endorse this comment. I’m not sure, but it does seem like there’s a much broader set of things that people research, fund, and work on (e.g. I don’t think there was much active work on biosecurity 5 years ago).
I actually don’t relate to much of what you’re saying here.
For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past, the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?
I know jellyfish is a fictional example. Can you give a real example of this happening? I’m not sure what you mean by “bam, you are now an EA”. What is the metric for this?
I wrote a post about two years ago arguing that promoting philosophy education in schools could be a credible longtermist intervention. Reception was fairly lukewarm, and my suggestion has clearly not been adopted as a longtermist priority by the community. One or two positive comments and OK-ish karma don’t mean anything—no one has acted on it. It seems to me that it’s a similar story for most new cause suggestions.
Now it’s more like any and all causes are assumed effective, or potentially effective, from the get-go, and are then supported by some marginal amount of evidence.
This doesn’t seem true to me, but I’m not an “old guard EA”. I’d be curious to know what examples of this you have in mind.
Strongly upvoted, but this should be its own top-level post.