Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes
I think this is basically a “this doesn’t represent members’ views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community-members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. The fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that; the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it has already reached an existential crisis. It will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting for reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear “this doesn’t represent members’ views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them
3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them.
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
It’s frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
I was just saying that if you have three interventions, whose relative popularity is A&lt;B&lt;C but whose expected impact, per a panel of EA experts, was C&lt;B&lt;A, then you probably want EA orgs to allocate their resources C&lt;B&lt;A.
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
We’re presenting introductory material, and the resources are readers’ attention
B is popular with people who identify with the EA community
B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation—better to either (A) present the arguments (e.g. arguments against Nick Beckstead’s thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very longtermist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics—as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.
There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.
A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2⁄3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)
I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?
B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.
Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?
I like this comment! But I think I would actually go a step further:
I don’t dispute the expertise of the people you listed.
I haven’t thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!
When I think of the term ‘expert’ I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.
I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.
But I don’t think that applies to global priorities for people who are already heavily involved in the EA community—the gap between these EAs and the global priorities ‘experts’ listed, in terms of knowledge about global priorities, is much, much smaller than the gap between me and an electrician about electrics. So it’s much less obvious whether it makes sense for these people to defer.
The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. But the incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, and board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes: “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI, have evolved their views in a similar direction, rather than treating the “incentive structure” as something that is monolithic, or that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, and also, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I’d just want to see the literature.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, MacAskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure that in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future, he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
[EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.
Sounds to me like he’s thought about this stuff.
I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
In my ideal universe, the podcast would be called an “Introduction to prioritization”, but also, online conversation would happen on a “priorities forum”, and so on.
I agree that naming would be preferable. But you didn’t propose that name in this thread, you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of coopting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
You… ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
I’m...stuff like
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially the cutting edge of it than many others.
You’ve …”authentic”
You seem to have missed my point. My suggestion is to trust experts to identify the top priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.
I agree… EA brand?
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
And the leaders’ forum is quite representative of highly engaged EAs , who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
If it’s a 4:1 ratio only, I don’t think that should mean that longtermist content should be ~95% of the content on 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work are the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes to be focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
I generally agree with the following statements you said:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they just included 1 or 2 episodes about near-termism. I think I and others would be more interested in sharing their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Not having any means I’d only share this with people interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on near-termism is evidence that they think that neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.
Most of what you’ve written about the longtermist shift seems true to me, but I’d like to raise a couple of minor points:
The EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list
Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the “Learn More” link was more prominent — it got over 10x as many clicks as all the articles on that list combined). The “Learn More” link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site’s “Resources” page was also much more popular than the homepage reading list, and always linked to the global poverty article.
So while that article was mistakenly left out of one of EA.org’s three lists of articles, it was by far the least important of those lists, based on traffic numbers.
EA Globals highlighted longtermist content, etc.
Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn’t be surprised if this change happened, but I don’t remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.
On #1: I agree that we should focus on the merits of the ideas on what causes to prioritize. However, it’s not clear to me that longtermism has won the cause prioritization / worldview prioritization debate so convincingly that it should be ~95% of an Intro to EA collection, which is what 80K’s feed is. I see CEA and Open Phil as expert organizations on the same level as 80K, and CEA and Open Phil still prioritize GH&D and animal welfare substantially, so I don’t see why 80K’s viewpoint or the longtermist viewpoint should be given especially large deference, so much so that ~95% of an Intro to EA collection is longtermist content.
On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. And I think people in the EA community are generally very smart and thoughtful, so I think it’s thoughtful and smart that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitation about the EA movement becoming drastically more longtermist than it already is, since that could lessen the number of smart, thoughtful people who get interested in and work on their cause, even if their cause has strong merit as a top priority.
Which “experts” are you asking us to defer to? The people I find most convincing re: philosophy of giving are people like Will MacAskill or Ajeya Cotra, who have both spoken in favour of worldview diversification and moral uncertainty.
Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes
I think this is basically a “this doesn’t represent members views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community-members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. Because the fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, as the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it should have reached an existential crisis. Because it will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting, for reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear “this doesn’t represent members views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them.
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
It’s frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.) And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
I was just saying that if you have three interventions, whose relative popularity is A<B<C but whose expected impact, per a panel of EA experts was C<B<A, then you probably want EA orgs to allocate their resources C<B<A.
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
We’re presenting introductory material, and the resources are readers attention
B is popular with people who identify with the EA community
B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation—better to either (A) present the arguments, (e.g. arguments against Nick Beckstead’s thesis) (B) analyse who are the most accomplished experts in this field, and/or (C) consider how thoughtful people have changed their mind. The EA leaders forum is very long-termist. The most accomplished experts are even more so: Christiano, Macaskill, Greaves, Shulman, Bostrom, etc.. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics—as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1,A-C), you end up wanting a strong longtermist emphasis.
I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.
There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.
A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2⁄3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)
I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?
B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.
Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?
I like this comment! But I think I would actually go a step further:
I haven’t thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!
When I think of the term ‘expert’ I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.
I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.
But I don’t think that applies to global priorities for people who are already heavily involved in the EA community: the gap between these EAs and the global priorities ‘experts’ listed, in terms of knowledge about global priorities, is much, much smaller than the gap between me and an electrician when it comes to electrical work. So it’s much less obvious whether it makes sense for these people to defer.
This is a really insightful comment.
The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. The incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the “incentive structure” as something that is monolithic, or that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, MacAskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future, he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
Sounds to me like he’s thought about this stuff.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
I agree that a different name would be preferable. But you didn’t propose that name in this thread, you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of coopting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially with its cutting edge, than many others are.
You seem to have missed my point. My suggestion is to trust experts to identify the top priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, as long as people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
Just a quick point on this:
If it’s a 4:1 ratio only, I don’t think that should mean that longtermist content should be ~95% of the content on 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work are the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes to be focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
Thanks for sharing your thinking!
I generally agree with the following statements you said:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they just included 1 or 2 episodes about near-termism. I think I and others would be more willing to share their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Since there aren’t any, I know I’ll only share this with people interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on near-termism is evidence that they think that neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.
Most of what you’ve written about the longtermist shift seems true to me, but I’d like to raise a couple of minor points:
Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the “Learn More” link was more prominent — it got over 10x as many clicks as all the articles on that list combined). The “Learn More” link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site’s “Resources” page was also much more popular than the homepage reading list, and always linked to the global poverty article.
So while that article was mistakenly left out of one of EA.org’s three lists of articles, it was by far the least important of those lists, based on traffic numbers.
Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn’t be surprised if this change happened, but I don’t remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.
Hi Ryan,
On #1: I agree that we should focus on the merits of the ideas on what causes to prioritize. However, it’s not clear to me that longtermism has won the cause prioritization / worldview prioritization debate so convincingly that it should make up ~95% of an Intro to EA collection, which is what 80K’s feed is. I see CEA and Open Phil as expert organizations on the same level as 80K, and CEA and Open Phil still prioritize GH&D and Animal Welfare substantially, so I don’t see why 80K’s viewpoint or the longtermist viewpoint should be deferred to so heavily that ~95% of an Intro to EA collection is longtermist content.
On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. And I think people in the EA community are generally very smart and thoughtful, so I think it’s thoughtful and smart that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitations about the EA movement becoming drastically more longtermist than it already is, since that could reduce the number of smart, thoughtful people who get interested in and work on their cause, even if their cause has strong merit as a top priority.
Which “experts” are you asking us to defer to? The people I find most convincing re: philosophy of giving are people like Will MacAskill or Ajeya Cotra, who have both spoken in favour of worldview diversification and moral uncertainty.