Thanks for making this podcast feed! I have a few comments about what you said here:
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development.
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our ‘top problems’), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our ‘episode 0’, as well as the outro to Holden’s episode.
I think if you are going to call this feed “Effective Altruism: An Introduction”, it doesn’t make sense to skew the selection so heavily towards longtermism. Maybe you should have named the feed “An Introduction to Effective Altruism & Longtermism” given the current list of episodes.
In particular, I think it would be better if the Lewis Bollard episode were added, along with one on Global Health & Dev’t, such as the episode with Rachel Glennerster or the one with James Snowden (which I liked).
If 80K wants to keep the feed to 10 episodes, then 2 episodes would have to be taken out. As much as I like the episode with David Denkenberger, I don’t think learning about ALLFED is “core” to EA, so that’s one I would have taken out. A 2nd episode to take out is a harder choice, but I would pick from among the episodes with Will MacAskill, Paul Christiano, or Hilary Greaves. I guess I would pick the one with Will, since I didn’t get much value from that episode, and I’m unsure if others would.
Alternatively, an easier solution is to expand the number of episodes in the feed to 12, which isn’t that much more than 10.
I think it is important to include episodes on animal welfare and global health and development because:
The EA movement does important work in these two causes
Many EAs still care about or work on these two causes, and would likely want more people to continue entering them
People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them, even when it could be, if they just learned more about animal welfare or global health and development.
As a broader point, when we introduce or talk about EA, especially with large reach (like 80K’s reach), I think it’s important to convey that the EA movement works on a variety of causes and worldviews.
Even from a longtermist perspective, I think the EA community is better the “broader” it is and the more it also includes work on other “non-longtermist” causes, such as global health and development and animal welfare. This way, the community can be bigger, and the bigger the community is, the easier it probably is to influence the long-term future for the better. For example, more people would be in government or in influential roles.
These are just my thoughts. I’m open to hearing others’ thoughts too!
Possible-bias disclosure: am longtermist, focused on x-risk.
I haven’t heard all of the podcast episodes under consideration, but methodologically I like the idea of there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods. If there’s an unresolved disagreement about one of those things, I’d usually rather see people make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products).
In this case, I’d rather see a new podcast episode collection that’s more shorttermist and see whether a cohesive, useful playlist can be designed that way.
And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I’d like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what “arcs” they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
Hi Rob, I also like the idea of “there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods.”
However, it’s not easy for people to “make new intro resources to compete with the old one”. Most people do not have the brand or reach of 80,000 Hours.
It’s likely that only very popular figures in the EA community would get substantial reach if they made an Intro to EA collection, and even then it would likely not be as large as the reach of 80,000 Hours. As such, 80,000 Hours’ choice of which Intro to EA resources to include is quite hard to compete with, and so should ideally be more representative of what the community thinks.
80K will somewhat solve this problem themselves since they will create their own feed that exposes people to a wider variety of problems and topics, and possibly they could create a near-termist feed aside from that too. But I still think it would be better if what 80K marketed as an “Intro to EA” feed had more global health and dev’t and animal welfare content. I talk more about this here.
I do see that many hours probably went into picking the ten episodes. But it seems like 80K didn’t get enough feedback from more people (or a wider variety of people) before releasing this. Hence I’m giving my feedback this way, and judging from the upvotes, quite a few people agree with me.
Of course, I agree that more testing and re-listening could be done. But I would think that a significant % of people who get interested in EA, including quite a few people who are into longtermism, first get interested in global health and development or animal welfare, before getting interested in longtermism. And I think 80K might be losing out on these people with this feed.
And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I’d like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what “arcs” they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
I agree that that’s how I want the eventual decision to be made. I’m not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian’s or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.
This has some flavor of ‘X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I’ll defer to them’, which I think EAs generally say/think/do too often. It’s very easy to miss things even when you’ve worked on something for a while (esp. if it’s more in the some months than many years range) and outsiders often can actually contribute something important. I think this is already surprisingly often the case with research, and much more so the case with something like an intro resource where people’s reactions are explicitly part of what you’re optimizing for. (Obviously what we care about are new-people’s reactions, but I still think that people-within-EA-reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)
As with everything, there’s some risk of the opposite (‘not expecting enough of professionals?’), but I think EA currently is too far on the deferry end (at least within EA, I could imagine that it’s the opposite with experts outside of EA).
Meta: Rereading your comment, I think it’s more likely that your comment was either meant as a message to 80,000 Hours about how you want them to make their decision eventually or something completely different, but I think it’s good to leave thoughts on possible interpretations of what people write.
Yeah, I endorse all of these things:
Criticizing 80K when you think they’re wrong (especially about object-level factual questions like “is longtermism true?”).
Criticizing EAs when you think they’re wrong even if you think they’ve spent hundreds of hours reaching some conclusion, or producing some artifact.
(I.e.: try to model how much thought and effort people have put into things, and keep in mind that no amount of effort makes you infallible. Even if it turns out the person didn’t make a mistake, raising the question of whether they messed up can help make it clearer why a choice was made.)
Using the comment section on a post like this to solicit interest in developing a competitor-podcast-episode-intro-resource.
Loudly advertising your competitor episode list here, so people can compare the merits of 80K’s playlist to yours.
The thing I don’t endorse is what I talk about in my comments.
Conversely, if the 80K intro podcast list was just tossed together in a few minutes without much concern for narrative flow / sequencing / cohesiveness, then I’m much less averse to redesign-via-quick-EA-Forum-comments. :)
[Just threading responses to different topics separately.] Regarding including Dave Denkenberger’s episode, the reason for that isn’t that alternative foods or disaster resilience are especially central EA problem areas.
Rather, in keeping with the focus on worldview and ‘how we think’, we put it in there as a demonstration of entrepreneurship, independent thinking, and general creativity. I can totally see how people could prefer that it be replaced with a different theme, but that was the reasoning. (We give the motivation for including each episode in their respective intros.) — Rob and Keiran
Ah, thanks for the clarification! That makes me feel less strongly about the lack of diversity. I interpreted it as prioritising ALLFED over global health stuff as representative of the work of the EA movement, which felt glaringly wrong.
Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes
I think this is basically a “this doesn’t represent members’ views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community-members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. Because the fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, as the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it should have reached an existential crisis. Because it will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting, for reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear “this doesn’t represent members’ views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them.
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
It’s frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
I was just saying that if you have three interventions, whose relative popularity is A<B<C but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A.
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
1. We’re presenting introductory material, and the resources are readers’ attention
2. B is popular with people who identify with the EA community
3. B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation—better to either (A) present the arguments (e.g. arguments against Nick Beckstead’s thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very long-termist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics—as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.
There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.
A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2⁄3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)
I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?
B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.
Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?
I like this comment! But I think I would actually go a step further:
I don’t dispute the expertise of the people you listed.
I haven’t thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!
When I think of the term ‘expert’ I usually have in mind people who are building on decades of accumulated knowledge from a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.
I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.
But I don’t think that applies to global priorities for people who are already heavily involved in the EA community—the gap between these EAs and the global priorities ‘experts’ listed, in terms of knowledge about global priorities, is much, much smaller than the gap between me and an electrician when it comes to electrical work. So it’s much less obvious whether it makes sense for these people to defer.
The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should carry some weight, to the extent that they think about cause prio and issues with short and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. The incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the “incentive structure” as something that is monolithic, or that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on-record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, and also, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should carry some weight, to the extent that they think about cause prio and issues with short and longtermism; I’d just want to see the literature.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, MacAskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future, he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
[EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.
Sounds to me like he’s thought about this stuff.
I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
In my ideal universe, the podcast would be called an “Introduction to prioritization”, but also, online conversation would happen on a “priorities forum”, and so on.
I agree that naming would be preferable. But you didn’t propose that name in this thread, you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of coopting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
You… ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
I’m...stuff like
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially the cutting edge of it than many others.
You’ve …”authentic”
You seem to have missed my point. My suggestion is to trust experts to identify the top priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.
I agree… EA brand?
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exist, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups find a new name and start a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
If it’s a 4:1 ratio only, I don’t think that should mean that longtermist content should be ~95% of the content on 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work are the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes to be focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
I generally agree with the following statements you said:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they just included 1 or 2 episodes about near-termism. I think I and others would be more willing to share their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Without any, I would only share it with people already interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on near-termism is evidence that they think that neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.
Most of what you’ve written about the longtermist shift seems true to me, but I’d like to raise a couple of minor points:
The EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list
Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the “Learn More” link was more prominent — it got over 10x the number of clicks as every article on that list combined). The “Learn More” link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site’s “Resources” page was also much more popular than the homepage reading list, and always linked to the global poverty article.
So while that article was mistakenly left out of one of EA.org’s three lists of articles, it was by far the least important of those lists, based on traffic numbers.
EA Globals highlighted longtermist content, etc.
Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn’t be surprised if this change happened, but I don’t remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.
On #1: I agree that we should focus on the merits of the ideas on what causes to prioritize. However, it’s not clear to me that longtermism has so convincingly won the cause prioritization / worldview prioritization debate that it should make up ~95% of an Intro to EA collection, which is what 80K’s feed is. I see CEA and OpenPhil as expert organizations on the same level as 80K, and CEA and OpenPhil still prioritize GH&D and Animal Welfare substantially, so I don’t see why 80K’s viewpoint or the longtermist viewpoint should be given such large deference that ~95% of an Intro to EA collection is longtermist content.
On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. And I think people in the EA community are generally very smart and thoughtful, so I think it’s thoughtful and smart that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitations about the EA movement becoming drastically more longtermist than it already is, since that could lessen the number of smart, thoughtful people who get interested in and work on their cause, even if their cause has strong merits to be a top priority.
Which “experts” are you asking us to defer to? The people I find most convincing re: philosophy of giving are people like Will MacAskill or Ajeya Cotra, who have both spoken in favour of worldview diversification and moral uncertainty.
Thanks for the feedback. These comments have prompted us to think about the issue some more — I won’t be able to get you a full answer before the weekend, but we’re working on a bigger picture solution to this which I hope will address some of your worries.
We should be able to fill you in on our plans next week. (ADDED: screw it we’ll just finish it tonight — posted as a new top level comment.)
I strongly second all of this. I think 80K represents quite a lot of EA’s public-facing outreach, and that it’s important either to be explicit that this is longtermism-focused, or to try to be representative of what happens in the movement as a whole. I think this especially holds for something explicitly framed as an introductory resource, since I expect many people get grabbed by global health/animal welfare angles who don’t get grabbed by longtermist angles.
Though I do see the countervailing concern that 80K is strongly longtermism-focused, and that it’d be disingenuous for an introduction to 80K to give disproportionate time to neartermist causes, if those are explicitly de-prioritised.
Hi Neel, thanks for these thoughts. I’ve responded to the broader issue in a new top-level comment.
Just on the point that we should be explicit that this is longtermism focused: while longtermism isn’t in the title, I tried to make it pretty explicit in the series’ ‘Episode 0’:
One final note before we start. We wanted to keep this introduction to just ten episodes, which meant we had to make some tough decisions about what made the cut. This selection skews towards focusing on longtermism and efforts to preserve a long and positive future for humanity, because at 80,000 Hours we think that’s a particularly promising way for many of our readers to make a difference.
But as I was saying just a moment ago, people in the community have a wide range of views on the question of what is most valuable to work on, and unfortunately there’s no room for them all to get a dedicated episode in this series.
The good news is there are episodes about many more problems on the main 80,000 Hours Podcast feed to satisfy your curiosity. For instance, if you’d like to hear more about global health and development I can recommend #49 – Dr Rachel Glennerster on a year’s worth of education for under $1 and other development best buys.
If you’d prefer to hear more about climate change, I can suggest #85 – Mark Lynas on climate change, societal collapse & nuclear energy.
And if you want to hear more about efforts to improve the wellbeing of animals, especially those raised in farms, I can recommend going and listening to #8 – Lewis Bollard on how to end factory farming in our lifetimes.
There’s also this earlier on:
A 2019 survey of people involved in effective altruism found that 22% thought global poverty should be a top priority, 16% thought the same of climate change, and 11% said so of risks from advanced artificial intelligence. So a wide range of views on which causes are most pressing are represented in the group.
Thanks for the clarification. I’m glad that’s in there, and I’ll feel better about this once the ‘Top 10 problem areas’ feed exists, but I still feel somewhat dissatisfied. I think that ‘some EAs prioritise longtermism, some prioritise neartermism or are neutral. 80K personally prioritises longtermism, and does so in this podcast feed, but doesn’t claim to speak on behalf of the movement and will point you elsewhere if you’re specifically interested in global health or animal welfare’ is a complex and nuanced point. I generally think it’s bad to try making complex and nuanced points in introductory material like this, and expect that most listeners who are actually new to EA wouldn’t pick up on that nuance.
I would feel better about this if the outro episode covered the same point, I think it’s easier to convey at the end of all this when they have some EA context, rather than at the start.
A concrete scenario to sketch out my concern:
Alice is interested in EA, and somewhat involved. Her friend Bob is interested in learning more, and Alice looks for intro materials. Because 80K is so prominent, Alice comes across ‘Effective Altruism: An Introduction’ first, and recommends this to Bob. Bob listens to the feed, and learns a lot, but because there’s so much content and Bob isn’t always paying close attention, Bob doesn’t remember all of it. Bob only has a vague memory of Episode 0 by the end, and leaves with a vague sense that EA is an interesting movement, but only cares about weird, abstract things rather than suffering happening today, and concludes that the movement has got a bit too caught up in clever arguments. And as a result, Bob decides not to engage further.
Hey Khorton, thanks for checking that. Initially I was puzzled about why I’d made this error, but then I saw that “People could rate more than one area as the “top priority”. As a result the figures sum to more than 100%.”
That survey design makes things a bit confusing, but the end result is that each of these votes can only be read as a vote for one of the top few priorities. — Rob
Hi,
I wrote the cause area EA Survey 2019 post for Rethink Priorities, so I thought I should just weigh in here on this minor point.
Fwiw, I think it’s more accurate to say 22% of respondents thought Global Poverty should be at least one of the top priorities, if not the only top priority, but when forced to pick only one of five traditional cause areas to be the top priority, 32% chose Global Poverty.
The data shows 476 of the 2164 respondents (22%) who gave any top priority cause rating in that question selected Global Poverty and “this should be the top priority”. However, 356 of those 476 also selected another cause area as “this should be the top priority”, and 120 only selected Global Poverty for “this should be the top priority”. So 5.5% thought Global Poverty should be the ONLY top priority, and 16.5% thought Global Poverty should be one, among others, of the top priorities.
Also note that the subsequent question forced respondents to choose only one of five traditional cause areas as a top priority, and there 32% (of 2023) chose Global Poverty.
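To make the arithmetic above easy to check, here is a minimal sketch in Python using only the counts quoted in this comment (the variable names are mine and purely illustrative, not taken from the survey itself):

```python
# Counts quoted in the comment above: 2,164 respondents rated at least one
# cause as a top priority; 476 gave Global Poverty a "top priority" rating,
# and 356 of those also gave another cause a "top priority" rating.
total_respondents = 2164
gp_top = 476                          # Global Poverty rated "the top priority"
gp_top_shared = 356                   # also rated another cause as "the top priority"
gp_top_only = gp_top - gp_top_shared  # 120: Global Poverty as the only top priority

print(f"Global Poverty as a top priority (possibly shared): {gp_top / total_respondents:.1%}")        # ~22.0%
print(f"Global Poverty as the ONLY top priority:            {gp_top_only / total_respondents:.1%}")   # ~5.5%
print(f"Global Poverty as one top priority among others:    {gp_top_shared / total_respondents:.1%}") # ~16.5%
```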
Thanks for making this podcast feed! I have a few comments about what you said here:
I think if you are going to call this feed “Effective Altruism: An Introduction”, it doesn’t make sense to skew the selection towards longtermism so heavily. Maybe you should have phrased the feed as “An Introduction to Effective Altruism & Longtermism” given the current list of episodes.
In particular, I think it would be better if the Lewis Bollard episode was added, and one on Global Health & Dev’t, such as either the episode with Rachel Glennerster or James Snowden (which I liked).
If 80K wanted to limit the feed to 10 episodes, then that means 2 episodes would have to be taken out. As much as I like the episode with David Denkenberger, I don’t think learning about ALLFED is “core” to EA, so that’s one that I would have taken out. A 2nd episode to take out is a harder choice, but I would pick between taking one out among the episodes with Will MacAskill, Paul Christiano, or Hilary Greaves. I guess I would pick the one with Will, since I didn’t get much value from that episode, and I’m unsure if others would.
Alternatively, an easier solution is to expand the number of episodes in the feed to 12. 12 isn’t that much farther from 10.
I think it is important to include an episode on animal welfare and global health and development because
The EA movement does important work in these two causes
Many EAs still care about or work on these two causes, and would likely want more people to continue entering them
People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them, even when it could be, if they just learned more about animal welfare or global health and development.
As a broader point, when we introduce or talk about EA, especially with large reach (like 80K’s reach), I think it’s important to convey that the EA movement works on a variety of causes and worldviews.
Even from a longtermist perspective, I think the EA community is better the “broader” it is and the more it also includes work on other “non-longtermist” causes, such as global health and development and animal welfare. This way, the community can be bigger, and it’s probably easier to influence things for the long-term better the bigger the community is. For example, more people would be in government or in influential roles.
These are just my thoughts. I’m open to hearing others’ thoughts too!
Possible-bias disclosure: am longtermist, focused on x-risk.
I haven’t heard all of the podcast episodes under consideration, but methodologically I like the idea of there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods. If there’s an unresolved disagreement about one of those things, I’d usually rather see people make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products).
In this case, I’d rather see a new podcast episodes collection that’s more shorttermist and see whether a cohesive, useful playlist can be designed that way.
And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I’d like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what “arcs” they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
Hi Rob, I also like the idea of “there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods.”
However, it’s not easy for “people make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products).” Most people do not have the brand or reach of 80,000 Hours.
It’s likely that only very popular figures in the EA community would get substantial reach if they made an Intro to EA collection, and it would still likely not be as large as the reach of 80,000 Hours’s. As such, 80,000 Hours’s choice of what Intro to EA resources to include is quite hard to compete with, and thus should ideally be more representative of what the community thinks.
80K will somewhat solve this problem themselves since they will create their own feed that exposes people to a wider variety of problems and topics, and possibly they could create a near-termist feed aside from that too. But I still think it would be better if what 80K marketed as an “Intro to EA” feed had more global health and dev’t and animal welfare content. I talk more about this here.
I do see that many hours probably went into picking the ten episodes. But it seems like 80K didn’t get enough feedback from more people (or a wider variety of people) before releasing this. Hence I’m giving my feedback this way, and judging from the upvotes, quite a few people agree with me.
Of course, I agree that more testing and re-listening could be done. But I would think that a significant % of people who get interested in EA, including quite a few people who are into longtermism, first get interested in global health and development or animal welfare, before getting interested in longtermism. And I think 80K might be losing out on these people with this feed.
I agree that that’s how I want the eventual decision to be made. I’m not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian’s or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.
This has some flavor of ‘X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I’ll defer to them’, which I think EAs generally say/think/do too often. It’s very easy to miss things even when you’ve worked on something for a while (esp. if it’s more in the some months than many years range) and outsiders often can actually contribute something important. I think this is already surprisingly often the case with research, and much more so the case with something like an intro resource where people’s reactions are explicitly part of what you’re optimizing for. (Obviously what we care about are new-people’s reactions, but I still think that people-within-EA-reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)
As with everything, there’s some risk of the opposite (‘not expecting enough of professionals?’), but I think EA currently is too far on the deferry end (at least within EA, I could imagine that it’s the opposite with experts outside of EA).
Meta: Rereading your comment, I think it’s more likely that your comment was either meant as a message to 80,000 Hours about how you want them to make their decision eventually or something completely different, but I think it’s good to leave thoughts on possible interpretations of what people write.
Yeah, I endorse all of these things:
Criticizing 80K when you think they’re wrong (especially about object-level factual questions like “is longtermism true?”).
Criticizing EAs when you think they’re wrong even if you think they’ve spent hundreds of hours reaching some conclusion, or producing some artifact.
(I.e.: try to model how much thought and effort people have put into things, and keep in mind that no amount of effort makes you infallible. Even if it turns out the person didn’t make a mistake, raising the question of whether they messed up can help make it clearer why a choice was made.)
Using the comment section on a post like this to solicit interest in developing a competitor-podcast-episode-intro-resource.
Loudly advertising your competitor episode list here, so people can compare the merits of 80K’s playlist to yours.
The thing I don’t endorse is what I talk about in my comments.
Conversely, if the 80K intro podcast list was just tossed together in a few minutes without much concern for narrative flow / sequencing / cohesiveness, then I’m much less averse to redesign-via-quick-EA-Forum-comments. :)
[Just threading responses to different topics separately.] Regarding including Dave Denkenberger’s episode, the reason for that isn’t that alternative foods or disaster resilience are especially central EA problem areas.
Rather, in keeping with the focus on worldview and ‘how we think’, we put it in there as a demonstration of entrepreneurship, independent thinking, and general creativity. I can totally see how people could prefer that it be replaced with a different theme, but that was the reasoning. (We give the motivation for including each episode in their respective intros.) — Rob and Keiran
I understand—thanks for the clarification!
Ah, thanks for the clarification! That makes me feel less strongly about the lack of diversity. I interpreted it as prioritising ALLFED over global health stuff as representative of the work of the EA movement, which felt glaringly wrong
Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes
I think this is basically a “this doesn’t represent members views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community-members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. Because the fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, as the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it should have reached an existential crisis. Because it will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting, for reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear “this doesn’t represent members views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them.
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
It’s frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.) And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
I was just saying that if you have three interventions whose relative popularity is A<B<C, but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A.
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
1. We’re presenting introductory material, and the resource in question is readers’ attention;
2. B is popular with people who identify with the EA community; and
3. B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation. Better to (A) present the arguments (e.g. arguments against Nick Beckstead’s thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very long-termist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics—as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If, instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.
There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.
A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2⁄3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)
I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?
B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.
Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?
I like this comment! But I think I would actually go a step further:
I haven’t thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!
When I think of the term ‘expert’ I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.
I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.
But I don’t think that applies to global priorities for people who are already heavily involved in the EA community: the gap between these EAs and the global priorities ‘experts’ listed, in terms of knowledge about global priorities, is much, much smaller than the gap between me and an electrician when it comes to electrics. So it’s much less obvious whether it makes sense for these people to defer.
This is a really insightful comment.
The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should carry some weight, to the extent that they think about cause prio and the issues with short- and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. The incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, and board members, most of whom worked on short-term issues at one time, including at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes: “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI, have evolved their views in a similar direction, rather than treating the “incentive structure” as something monolithic, or as something that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour short-termist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, MacAskill, Greaves, Shulman, Bostrom, etc.” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future, he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
Sounds to me like he’s thought about this stuff.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
I agree that name would be preferable. But you didn’t propose that name in this thread; you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of co-opting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially with the cutting edge of it, than many others.
You seem to have missed my point. My suggestion is to trust experts to identify the top priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups find a new name and start a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
Just a quick point on this:
If it’s only a 4:1 ratio, I don’t think that should mean that longtermist content makes up ~95% of the content in 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work is the most important thing to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes to be focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
Thanks for sharing your thinking!
I generally agree with the following statements you said:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K, and EA more broadly, if they just included 1 or 2 episodes about near-termism. I think I and others would be more willing to share the collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Without any, I know I’ll only share it with people interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on near-termism is evidence that they think that neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.
Most of what you’ve written about the longtermist shift seems true to me, but I’d like to raise a couple of minor points:
Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the “Learn More” link was more prominent — it got over 10x the number of clicks as every article on that list combined). The “Learn More” link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site’s “Resources” page was also much more popular than the homepage reading list, and always linked to the global poverty article.
So while that article was mistakenly left out of one of EA.org’s three lists of articles, it was by far the least important of those lists, based on traffic numbers.
Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn’t be surprised if this change happened, but I don’t remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.
Hi Ryan,
On #1: I agree that we should focus on the merits of the ideas about which causes to prioritize. However, it’s not clear to me that longtermism has won the cause prioritization / worldview prioritization debate so convincingly that it should make up ~95% of an Intro to EA collection, which is what 80K’s feed is. I see CEA and Open Phil as expert organizations on the same level as 80K, and they still prioritize GH&D and animal welfare substantially, so I don’t see why 80K’s viewpoint or the longtermist viewpoint should be given such large deference that ~95% of an Intro to EA collection is longtermist content.
On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. People in the EA community are generally very smart and thoughtful, so I think it’s reasonable that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitations about the EA movement becoming drastically more longtermist than it already is, since that could lessen the number of smart, thoughtful people who get interested in and work on their causes, even if those causes have strong merits as top priorities.
Which “experts” are you asking us to defer to? The people I find most convincing re: philosophy of giving are people like Will MacAskill or Ajeya Cotra, who have both spoken in favour of worldview diversification and moral uncertainty.
Hi Brian, (also Ula and Neel below),
Thanks for the feedback. These comments have prompted us to think about the issue some more — I won’t be able to get you a full answer before the weekend, but we’re working on a bigger picture solution to this which I hope will address some of your worries.
We should be able to fill you in on our plans next week. (ADDED: screw it we’ll just finish it tonight — posted as a new top level comment.)
— Rob
I strongly second all of this. I think 80K represents quite a lot of EA’s public-facing outreach, and that it’s important either to be explicit that this is longtermism-focused, or to try to be representative of what happens in the movement as a whole. I think this especially holds for something explicitly framed as an introductory resource, since I expect many people get grabbed by global health/animal welfare angles who don’t get grabbed by longtermist angles.
Though I do see the countervailing concern that 80K is strongly longtermism-focused, and that it’d be disingenuous for an introduction to 80K to give disproportionate time to neartermist causes if those are explicitly de-prioritised.
Hi Neel, thanks for these thoughts. I’ve responded to the broader issue in a new top-level comment.
Just on the point that we should be explicit that this is longtermism-focused: while longtermism isn’t in the title, I tried to make it pretty explicit in the series’ ‘Episode 0’:
There’s also this earlier on:
Thanks for the clarification. I’m glad that’s in there, and I’ll feel better about this once the ‘Top 10 problem areas’ feed exists, but I still feel somewhat dissatisfied. I think that ‘some EAs prioritise longtermism, some prioritise neartermism or are neutral. 80K personally prioritises longtermism, and does so in this podcast feed, but doesn’t claim to speak on behalf of the movement and will point you elsewhere if you’re specifically interested in global health or animal welfare’ is a complex and nuanced point. I generally think it’s bad to try making complex and nuanced points in introductory material like this, and expect that most listeners who are actually new to EA wouldn’t pick up on that nuance.
I would feel better about this if the outro episode covered the same point, I think it’s easier to convey at the end of all this when they have some EA context, rather than at the start.
A concrete scenario to sketch out my concern:
Alice is interested in EA, and somewhat involved. Her friend Bob is interested in learning more, and Alice looks for intro materials. Because 80K is so prominent, Alice comes across ‘Effective Altruism: An Introduction’ first, and recommends this to Bob. Bob listens to the feed, and learns a lot, but because there’s so much content and Bob isn’t always paying close attention, Bob doesn’t remember all of it. Bob only has a vague memory of Episode 0 by the end, and leaves with a vague sense that EA is an interesting movement, but only cares about weird, abstract things rather than suffering happening today, and concludes that the movement has got a bit too caught up in clever arguments. And as a result, Bob decides not to engage further.
In 2019, 22% of community members thought global poverty should be THE top priority; closer to 62% of people thought it should be one of several near-top priorities. https://forum.effectivealtruism.org/posts/8hExrLibTEgyzaDxW/ea-survey-2019-series-cause-prioritization
Hey Khorton, thanks for checking that. Initially I was puzzled by why I’d made this error, but then I saw that “People could rate more than one area as the “top priority”. As a result the figures sum to more than 100%.”
That survey design makes things a bit confusing, but the end result is that each of these votes can only be read as a vote for one of the top few priorities. — Rob
Hi, I wrote the cause area EA Survey 2019 post for Rethink Priorities so thought I should just weigh in here on this minor point.
Fwiw, I think it’s more accurate to say 22% of respondents thought Global Poverty should be at least one of the top priorities, if not the only top priority, but when forced to pick only one of five traditional cause areas to be the top priority, 32% chose Global Poverty.
The data shows that 476 of the 2164 respondents (22%) who gave any top-priority cause rating in that question selected Global Poverty as “this should be the top priority”. However, 356 of those 476 also selected another cause area as “this should be the top priority”, and 120 selected only Global Poverty for “this should be the top priority”. So 5.5% thought Global Poverty should be the ONLY top priority, and 16.5% thought Global Poverty should be one, among others, of the top priorities. Also note that the subsequent question forced respondents to choose only one of five traditional cause areas as a top priority, and there 32% (of 2023) chose Global Poverty.
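For anyone who wants to check those figures, here is a minimal sketch of the arithmetic using only the respondent counts stated above; the variable names are mine, not the survey’s.

```python
# Sketch of the percentage arithmetic above, using only the counts
# stated in this comment (variable names are illustrative, not the survey's).
total_raters = 2164      # respondents who gave any top-priority rating
gp_top = 476             # rated Global Poverty "this should be the top priority"
gp_plus_other = 356      # of those, also gave another cause that rating
gp_only = gp_top - gp_plus_other  # 120: Global Poverty was their only "top priority"

print(f"GP rated a top priority:          {gp_top / total_raters:.1%}")        # ~22.0%
print(f"GP the ONLY top priority:         {gp_only / total_raters:.1%}")       # ~5.5%
print(f"GP one of several top priorities: {gp_plus_other / total_raters:.1%}") # ~16.5%

# Forced-choice follow-up question: 2023 respondents, 32% chose Global Poverty.
print(f"Forced choice, GP: {0.32 * 2023:.0f} of 2023 respondents")             # ~647
```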