I first became closely involved/interested in EA in 2013, and I think the “change to longtermism” is overstated.
Longtermism isn’t new. As a newbie, I learned about global catastrophic risk as a core, obvious part of the movement (through posts like this one) and read this book on GCRs — which was frequently recommended at the time, and published more than a decade before The Precipice.
And near-term work hasn’t gone away. Nearly every organization in the global health space that was popular with EA early on still exists, and I’d guess they almost all have more funding than they did in 2013. (And of course, lots of new orgs have popped up.) I know less about animal welfare, but I’d guess that there is much more funding in that space than there was in 2013. EA’s rising tide has lifted a lot of boats.
Put another way: If you want to do impact-focused work on global health or animal welfare, I think it’s easier to do so now than it was in 2013. The idea that EA has turned its back on these areas just doesn’t track for me.
I think a focus on absolute values seems misleading here. You’re totally right that the absolute value of funding has gone up for all cause areas (see this spreadsheet). However, there’s also a pretty clear trend that the relative funding towards global health and animal welfare has gone down quite a lot: global health went from approximately all EA funds in 2012 to 54% in 2022, and animal welfare, which seemed to peak at 16% in 2019, might be only 5% in 2022.
I think this relative shift of attention/priorities is what people are often referring to when they say “EA has changed”, etc.
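The point about absolute vs. relative funding is easy to see with a toy calculation. A minimal sketch (the dollar figures below are hypothetical, chosen only so the shares roughly echo the percentages mentioned above):

```python
# Hypothetical grant totals in $M (illustrative, not actual EA data),
# showing how every area can grow in absolute terms while shares still shift.
funding = {
    2012: {"global_health": 30, "animal_welfare": 1.5, "longtermism": 0.5},
    2022: {"global_health": 325, "animal_welfare": 30, "longtermism": 245},
}

for year, areas in funding.items():
    total = sum(areas.values())
    for area, amount in areas.items():
        print(f"{year} {area}: ${amount}M ({amount / total:.0%} of total)")
```

With these made-up numbers, every area’s absolute funding grows, yet global health’s share falls from roughly 94% to roughly 54% — both claims (“more money everywhere” and “big relative shift”) can be true at once.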
I agree. It seems obvious that effective altruism has changed in important ways. Yes, some characterisations of this change are exaggerated, but to deny that there’s been a change altogether doesn’t seem right to me.
It may be more about how much of the conversation space is taken up by different topics than about the funding amounts (relative or absolute).
I think even if there had been a larger animal funder keeping the percentages the same, but no change in topics, people would still have sensed a shift.
Yeah, I agree with that. I guess I was using funding amounts as a proxy for general EA attention, which includes things like EA Forum posts, orgs working on an issue, the focus of EA intro materials, etc.
That’s an amazing spreadsheet you linked there! Did you collect the data yourself?
I wish, it’s so interesting! I found it linked (very surprisingly) in the Time article about Will/longtermism, in this quote:

“The expansion has been fueled by a substantial rise in donations. In 2021, EA-aligned foundations distributed more than $600 million in publicly listed grants—roughly quadruple what they gave five years earlier.”
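As a quick sanity check on that Time figure (more than $600 million in 2021, roughly quadruple the level of five years earlier), the implied average growth rate can be back-computed. A rough sketch, assuming the “quadruple over five years” framing is exact:

```python
# "Roughly quadruple over five years" implies this compound annual growth rate.
grants_2021 = 600e6          # > $600M in publicly listed grants (2021)
growth_factor = 4            # vs. roughly 2016 levels, per the quote
years = 5

grants_2016 = grants_2021 / growth_factor
annual_growth = growth_factor ** (1 / years) - 1
print(f"~${grants_2016 / 1e6:.0f}M in 2016, ~{annual_growth:.0%} growth per year")
```

That works out to roughly $150M in 2016 and about 32% annual growth, which gives a sense of just how fast the funding base expanded.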
GiveWell apparently has different (higher) numbers: https://www.givewell.org/about/impact
Aaron, I agree that these global health issues are getting the serious attention they need now, and I don’t think that EA has turned its back on these issues.
Rather, it’s the narrative about EA that feels like it’s shifting. The New Yorker’s piece describes the era of “bed nets” as over, and while that isn’t true when you look at the funding, the attention placed on longtermism shifts the EA brand in a big way. One of EA’s strengths was that anyone could do it: “here’s how you can save one life today.” The practical, immediate impact of EA appeals to a lot of young people who want to give back and help others. With longtermism, being an effective altruist is largely limited to those with the advanced knowledge and technical skills to engage with these highly complex problems.
As press and attention are drawn to this work, it may come to define the EA movement, and, over time, EA may become less accessible to people who would have been drawn to its original mission. As an outsider to EA, and as a PM who builds AI models, I’m not able to assess which AI alignment charities are the most effective, and that saps my confidence that my donation will be effective.
Again, this is purely a branding/marketing problem, but it could still be an existential risk for the movement. You could imagine a world where these two initiatives build their own brands: longtermism could become 22nd Century Philanthropy (22C!), and people committed to this cause could help build that movement. At the same time, there are millions of people who want to funnel billions of dollars to empirically validated charities that make the world immediately better, and the EA brand serves as a clearly defined entry point into doing that work.
Over EA’s history, the movement has had a porous quality: it invites outsiders and lets them rapidly become part of the community by making philanthropic endeavors concrete to understand and evaluate. In a shift toward abstract, difficult-to-understand longtermism issues, EA may lose the very quality that drives its growth. In short, the EA movement could be defined by making the greatest impact on the greatest number of people today, while 22nd Century Philanthropy could exist as a movement for impacting the greatest number of people tomorrow, with each movement attracting people passionate about those different causes.