Launching a new resource: ‘Effective Altruism: An Introduction’
Today we’re launching a new podcast feed that might be useful to you or someone you know.
It’s called Effective Altruism: An Introduction, and it’s a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.
We think that it fills a gap in the introductory resources about effective altruism that are already out there. It’s a particularly good fit for people who:
prefer listening over reading, or conversations over essays
have read about the big central ideas, but want to see how we actually think and talk
want to get a more nuanced understanding of how the community applies EA principles in real life — as an art rather than a science.
The reason we put this together now is that, as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just ‘go back and listen through most of our archives.’
We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.
Across the ten episodes, we discuss:
What effective altruism at its core really is
The strategies for improving the world that are most popular within the effective altruism community, and why they’re popular
The key disagreements between researchers in the field
How to ‘think like an effective altruist’
How you might figure out how to make your biggest contribution to solving the world’s most pressing problems
At the end of each episode we suggest the interviews people should go to next if they want to learn more about each area.
If someone you know wants to get an understanding of what 80,000 Hours or effective altruism is all about, and audio content fits into their life better than long essays, hopefully this will prove a great resource to point them to.
It might also be a great fit for local groups, which we’ve learned are already using episodes of the show as discussion material.
Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well.
The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development. (ADDED: See more discussion of how we plan to deal with this issue here.)
We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our ‘top problems’), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our ‘episode 0’, as well as the outro to Holden’s episode.
If things go well with this one, we may put together multiple curated feeds, likely differentiated by difficulty level, or cause area.
Folks can find it by searching for ‘effective altruism’ in their podcasting app.
We’re very open to feedback – comment here, or you can email us at podcast@80000hours.org.
— Rob and Keiran
Thanks for making this podcast feed! I have a few comments about what you said here:
I think if you are going to call this feed “Effective Altruism: An Introduction”, it doesn’t make sense to skew the selection so heavily towards longtermism. Maybe you should have titled the feed “An Introduction to Effective Altruism & Longtermism”, given the current list of episodes.
In particular, I think it would be better if the Lewis Bollard episode were added, along with one on Global Health & Dev’t, such as the episode with Rachel Glennerster or the one with James Snowden (which I liked).
If 80K wants to limit the feed to 10 episodes, that means 2 episodes would have to be taken out. As much as I like the episode with David Denkenberger, I don’t think learning about ALLFED is “core” to EA, so that’s one that I would have taken out. A 2nd episode to take out is a harder choice, but I would pick from among the episodes with Will MacAskill, Paul Christiano, or Hilary Greaves. I guess I would pick the one with Will, since I didn’t get much value from that episode, and I’m unsure whether others would.
Alternatively, an easier solution is to expand the feed to 12 episodes; 12 isn’t that far from 10.
I think it is important to include episodes on animal welfare and global health and development because:
The EA movement does important work in these two causes
Many EAs still care about or work on these two causes, and would likely want more people to continue entering them
People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them, even when it could be, if they just learned more about animal welfare or global health and development.
As a broader point, when we introduce or talk about EA, especially with large reach (like 80K’s reach), I think it’s important to convey that the EA movement works on a variety of causes and worldviews.
Even from a longtermist perspective, I think the EA community is better the “broader” it is and the more it also includes work on other “non-longtermist” causes, such as global health and development and animal welfare. This way, the community can be bigger, and the bigger the community, the easier it probably is to influence things for the better over the long term. For example, more people would be in government or in other influential roles.
These are just my thoughts. I’m open to hearing others’ thoughts too!
Possible-bias disclosure: am longtermist, focused on x-risk.
I haven’t heard all of the podcast episodes under consideration, but methodologically I like the idea of there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods. If there’s an unresolved disagreement about one of those things, I’d usually rather see people make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products).
In this case, I’d rather see a new collection of podcast episodes that’s more shorttermist, and see whether a cohesive, useful playlist can be designed that way.
And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I’d like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what “arcs” they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.
Hi Rob, I also like the idea of “there being a wide variety of ‘intro’ EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods.”
However, it’s not easy for people to “make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products).” Most people do not have the brand or reach of 80,000 Hours.
It’s likely that only very popular figures in the EA community would get substantial reach if they made an Intro to EA collection, and it would still likely not be as large as 80,000 Hours’ reach. As such, 80,000 Hours’ choice of which Intro to EA resources to include is quite hard to compete with, and thus should ideally be more representative of what the community thinks.
80K will partly solve this problem themselves, since they will create their own feed that exposes people to a wider variety of problems and topics, and possibly they could create a near-termist feed as well. But I still think it would be better if what 80K marketed as an “Intro to EA” feed had more global health and dev’t and animal welfare content. I talk more about this here.
I do see that many hours probably went into picking the ten episodes. But it seems like 80K didn’t get feedback from enough people (or a wide enough variety of people) before releasing this. Hence I’m giving my feedback this way, and judging from the upvotes, quite a few people agree with me.
Of course, I agree that more testing and re-listening could be done. But I would think that a significant percentage of people who get interested in EA, including quite a few who eventually get into longtermism, first get interested in global health and development or animal welfare. And I think 80K might be losing out on these people with this feed.
I agree that that’s how I want the eventual decision to be made. I’m not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian’s or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.
This has some flavor of ‘X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I’ll defer to them’, which I think EAs generally say/think/do too often. It’s very easy to miss things even when you’ve worked on something for a while (esp. if it’s more in the some months than many years range) and outsiders often can actually contribute something important. I think this is already surprisingly often the case with research, and much more so the case with something like an intro resource where people’s reactions are explicitly part of what you’re optimizing for. (Obviously what we care about are new-people’s reactions, but I still think that people-within-EA-reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)
As with everything, there’s some risk of the opposite (‘not expecting enough of professionals?’), but I think EA currently is too far on the deferry end (at least within EA, I could imagine that it’s the opposite with experts outside of EA).
Meta: Rereading your comment, I think it’s more likely that your comment was either meant as a message to 80,000 Hours about how you want them to make their decision eventually, or as something completely different, but I think it’s good to leave thoughts on possible interpretations of what people write.
Yeah, I endorse all of these things:
Criticizing 80K when you think they’re wrong (especially about object-level factual questions like “is longtermism true?”).
Criticizing EAs when you think they’re wrong even if you think they’ve spent hundreds of hours reaching some conclusion, or producing some artifact.
(I.e.: try to model how much thought and effort people have put into things, and keep in mind that no amount of effort makes you infallible. Even if it turns out the person didn’t make a mistake, raising the question of whether they messed up can help make it clearer why a choice was made.)
Using the comment section on a post like this to solicit interest in developing a competitor-podcast-episode-intro-resource.
Loudly advertising your competitor episode list here, so people can compare the merits of 80K’s playlist to yours.
The thing I don’t endorse is what I talk about in my comments.
Conversely, if the 80K intro podcast list was just tossed together in a few minutes without much concern for narrative flow / sequencing / cohesiveness, then I’m much less averse to redesign-via-quick-EA-Forum-comments. :)
[Just threading responses to different topics separately.] Regarding including Dave Denkenberger’s episode, the reason for that isn’t that alternative foods or disaster resilience are especially central EA problem areas.
Rather, in keeping with the focus on worldview and ‘how we think’, we put it in there as a demonstration of entrepreneurship, independent thinking, and general creativity. I can totally see how people could prefer that it be replaced with a different theme, but that was the reasoning. (We give the motivation for including each episode in their respective intros.) — Rob and Keiran
I understand—thanks for the clarification!
Ah, thanks for the clarification! That makes me feel less strongly about the lack of diversity. I interpreted it as prioritising ALLFED over global health stuff as representative of the work of the EA movement, which felt glaringly wrong.
Let’s look at the three arguments for focusing more on shorttermist content:
1. The EA movement does important work in these two causes
I think this is basically a “this doesn’t represent members’ views” argument: “When leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement”. Clearly, to some extent, EA messaging has to cater to what current community members think. But it is better to defer to reason, evidence, and expertise, to the greatest extent possible. For example, when:
people demanded EA Handbook 2.0 refocus away from longtermism, or
Bostrom’s excellent talk on crucial considerations was removed from effectivealtruism.org
it would have been better to focus on the merits of the ideas, rather than follow majoritarian/populist/democratic rule. Because the fundamental goal of the movement is to focus on the biggest problems, and everything else has to work around that, as the movement is not about us. As JFK once (almost) said: “ask not what EA will do for you but what together we can do for utility”! Personally, I think that by the time a prioritisation movement starts deciding its priorities by vote, it has reached an existential crisis. Because it will take an awful lot of resources to convince the movement’s members of those priorities, which immediately raises the question of whether the movement could be outperformed by one that ditched the voting, for reason and evidence. To avoid the crisis, we could train ourselves, so that when we hear “this doesn’t represent members’ views”, we hear alarm bells ringing...
2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them / 3. People who get pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might think that the EA movement is not for them.
Right, but this is the same argument that would cause us to donate to a charity for guide dogs, because people want us to continue doing so. Just as with funds, there are limits to the audience’s attention, so it’s necessary to focus on attracting those who can do the most good in priority areas.
It’s frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…
People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.). And the people making these arguments are not connected to the EA community (they’d probably find it off-putting).
In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything.
Regarding the narrow issue of “Crucial Considerations” being removed from effectivealtruism.org, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.
I was just saying that if you have three interventions whose relative popularity is A<B<C but whose expected impact, per a panel of EA experts, is C<B<A, then you probably want EA orgs to allocate their resources C<B<A.
Is it an accurate summary to say that you think that sometimes we should allocate more resources to B if:
We’re presenting introductory material, and the resources are readers’ attention
B is popular with people who identify with the EA community
B is popular with people who are using logical arguments?
I agree that (1) carries some weight. For (2-3), it seems misguided to appeal to raw popularity among people who like rational argumentation—better to either (A) present the arguments (e.g. arguments against Nick Beckstead’s thesis), (B) analyse who the most accomplished experts in this field are, and/or (C) consider how thoughtful people have changed their minds. The EA leaders forum is very long-termist. The most accomplished experts are even more so: Christiano, MacAskill, Greaves, Shulman, Bostrom, etc. The direction of travel of people’s views is even more pro-longtermist. I doubt many of these people wanted to focus on unpopular, niche future topics—as a relative non-expert, I certainly didn’t. I publicly complained about the longtermist focus, until the force of the arguments (and meeting enough non-crazy longtermists) brought me around. If instead of considering (1-3), you consider (1, A-C), you end up wanting a strong longtermist emphasis.
I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.
There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.
A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2⁄3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)
I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”
I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the EA.org landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?
B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.
Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?
I like this comment! But I think I would actually go a step further:
I haven’t thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!
When I think of the term ‘expert’ I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.
I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.
But I don’t think that applies to global priorities for people who are already heavily involved in the EA community—the gap between these EAs and the global priorities ‘experts’ listed, in terms of knowledge about global priorities, is much, much smaller than the gap between me and an electrician when it comes to electrics. So it’s much less obvious whether it makes sense for these people to defer.
This is a really insightful comment.
The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.
A: I didn’t say we should defer only to longtermist experts, and I don’t see how this could come from any good-faith interpretation of my comment. Singer and Gates should carry some weight, to the extent that they think about cause prio and issues with short- and longtermism; I’d just want to see the literature.
I agree that incentives within EA lean (a bit) longtermist. The incentives don’t come from a vacuum. They were set by grant managers, donors, advisors, execs, board members. Most worked on short-term issues at one time, as did at least some of Beckstead, Ord, Karnofsky, and many others. At least in Holden’s case, he switched due to a combination of “the force of the arguments” and being impressed with the quality of thought of some longtermists. For example, Holden writes “I’ve been particularly impressed with Carl Shulman’s reasoning: it seems to me that he is not only broadly knowledgeable, but approaches the literature that influences him with a critical perspective similar to GiveWell’s.” It’s reasonable to be moved by good thinkers! I think that you should place significant weight on the fact that a wide range of (though not all) groups of experienced and thoughtful EAs, ranging from GiveWell to CEA to 80k to FRI have evolved their views in a similar direction, rather than treating the “incentive structure” as something that is monolithic, or that can explain away major (reasonable) changes.
B: I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they’re experts in content selection, then great! But I think authenticity is a strong default.
Regarding naming, I think you might be preaching to the choir (or at least to the wrong audience): I’m already on-record as arguing for a coordinated shift away from the name EA to something more representative of what most community leaders believe. In my ideal universe, the podcast would be called an “Introduction to prioritization”, and also, online conversation would happen on a “priorities forum”, and so on (or something similar). It’s tougher to ask people to switch unilaterally.
You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, Macaskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization.
I’m pretty sure that in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future, he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:
Sounds to me like he’s thought about this stuff.
You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”
I agree that that name would be preferable. But you didn’t propose that name in this thread; you argued that an “Intro to EA playlist”, effectivealtruism.org, and the EA Handbook (i.e. 3 things with “EA” in the name) should have a narrow longtermist focus. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of coopting and changing the existing EA brand?
OK, so essentially you don’t own up to strawmanning my views?
This could have been made clearer, but when I said that incentives come from incentive-setters thinking and being persuaded, the same applies to the choice of invitations to the EA leaders’ forum. And the leaders’ forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.
Yes, Gates has thought about cause prio some, but he’s less engaged with it, and especially with the cutting edge of it, than many others.
You seem to have missed my point. My suggestion is to trust experts to identify the top priority cause areas, but not on what messaging to use, and to instead authentically present info on the top priorities.
You seem to have missed my point again. As I said, “It’s [tough] to ask people to switch unilaterally”. That is, when people are speaking to the EA community, and while the EA community is the one that exists, I think it’s tough to ask them not to use the EA name. But at some point, in a coordinated fashion, I think it would be good if multiple groups found a new name and started a new intellectual project around that new name.
Per my bolded text, I don’t get the sense that I’m being debated in good faith, so I’ll try to avoid making further comments in this subthread.
Just a quick point on this:
If it’s only a 4:1 ratio, I don’t think that should mean that longtermist content should be ~95% of the content in 80K’s Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work is the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).
As such, it makes sense for 80K to have at least 1-2 out of 10 episodes to be focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or “priorities”) community to work on neartermist work. It doesn’t have to be a winner-take-all approach to cause prioritization and what content we say is “EA”.
Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?
It seems like 80K wants to feature some neartermist content in their next collection, but I didn’t object to the current collection for the same reason I don’t object to e.g. pages on Giving What We Can’s website that focus heavily on global development (example): it’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.
Some examples of what I mean:
If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.
Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.
It may have been better for 80K to refer to their collection as an “introduction to global priorities” or an “introduction to longtermism” or something like that, but I also think it’s perfectly legitimate to use the term “EA: An Introduction”. Giving What We Can talks about “EA” but mostly presents it through examples from global development. 80K does the same but mostly talks about longtermism. EA Global is more balanced than either of those. No single one of these projects is going to dominate the world’s perception of what “EA” is, and I think it’s fine for them to be a bit different.
(I’m more concerned about balance in cases where something could dominate the world’s perception of what EA is — I’d have been concerned if Doing Good Better had never mentioned animal welfare. I don’t think that a collection of old podcast episodes, even from a pretty popular podcast, has the same kind of clout.)
Thanks for sharing your thinking!
I generally agree with the following statements you said:
“It’s good for EA-branded content to be fairly balanced on the whole, but that doesn’t mean that every individual project has to be balanced.”
“If EA Global had a theme like “economic growth” for one conference and 1⁄3 of the talks were about that, I think that could be pretty interesting, even if it wasn’t representative of community members’ priorities as a whole.”
“Sometimes, I send out an edition of the EA Newsletter that is mostly development-focused, or AI-focused, because there happened to be a lot of good links about that topic that month. I think the newsletter would be worse if I felt compelled to have at least one article about every major cause area every month.”
Moreover, I know that a collection of old podcast episodes from 80K isn’t likely to dominate the world’s perception of what EA is. But I think it would benefit 80K and EA more broadly if they just included 1 or 2 episodes about near-termism. I think I and others would be more willing to share their collection with people as an intro to EA if there were 1 or 2 episodes about GH&D and animal welfare. Not having any means I’ll only share this with people interested in longtermism.
Anyway, I guess 80K’s decision to add one episode on near-termism is evidence that they think that neartermist interventions do merit some discussion within an “intro to EA” collection. And maybe they sense that more people would be more supportive of 80K and this collection if they did this.
If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point.
FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.
If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.
Thanks for everything you’ve done for the EA community! Good luck!
Oh come on, this is clearly unfair. I visited that group for a couple of months over seven years ago, because a trusted mentor recommended them. I didn’t find their approach useful, and quickly switched to working autonomously, on starting the EA Forum and EA Handbook v1. For the last 6-7 years, (many can attest that) I’ve discouraged people from working there! So what is the theory exactly?
I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.
In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).
But you’re right: it was a mistake to mention that fact, and I’m sorry for doing so.
Most of what you’ve written about the longtermist shift seems true to me, but I’d like to raise a couple of minor points:
Very few people ever clicked on the list of articles featured on the EA.org landing page (probably because the “Learn More” link was more prominent — it got over 10x as many clicks as all the articles on that list combined). The “Learn More” link, in turn, led to an intro article that prominently featured global poverty, as well as a list of articles that included our introduction to global poverty. The site’s “Resources” page was also much more popular than the homepage reading list, and always linked to the global poverty article.
So while that article was mistakenly left out of one of EA.org’s three lists of articles, it was by far the least important of those lists, based on traffic numbers.
Do you happen to have numbers on when/how EA Global content topics shifted to be more longtermist? I wouldn’t be surprised if this change happened, but I don’t remember seeing anyone write it up, and the last few conferences (aside from EA Global: Reconnect, which had four total talks) seem to have a very balanced mix of content.
Hi Ryan,
On #1: I agree that we should focus on the merits of the ideas about which causes to prioritize. However, it’s not clear to me that longtermism has so convincingly won the cause prioritization / worldview prioritization debate that it should make up ~95% of an Intro to EA collection, which is what 80K’s feed is. I see CEA and OpenPhil as expert organizations on the same level of expertise as 80K, and CEA and OpenPhil still prioritize GH&D and Animal Welfare substantially, so I don’t see why 80K’s viewpoint or the longtermist viewpoint should be given such large deference that ~95% of an Intro to EA collection is longtermist content.
On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. And I think people in the EA community are generally very smart and thoughtful, so I think it’s thoughtful and smart that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitations about the EA movement becoming drastically more longtermist than it currently is, since that could lessen the number of smart, thoughtful people who get interested in and work on their cause, even if their cause has strong merits as a top priority.
Which “experts” are you asking us to defer to? The people I find most convincing re: philosophy of giving are people like Will MacAskill or Ajeya Cotra, who have both spoken in favour of worldview diversification and moral uncertainty.
Hi Brian, (also Ula and Neel below),
Thanks for the feedback. These comments have prompted us to think about the issue some more — I won’t be able to get you a full answer before the weekend, but we’re working on a bigger picture solution to this which I hope will address some of your worries.
We should be able to fill you in on our plans next week. (ADDED: screw it we’ll just finish it tonight — posted as a new top level comment.)
— Rob
I strongly second all of this. I think 80K represents quite a lot of EA’s public-facing outreach, and that it’s important to either be explicit that this is longtermism focused, or to try to be representative of what happens in the movement as a whole. I think this especially holds for something explicitly framed as an introductory resource, since I expect many people get grabbed by global health/animal welfare angles who don’t get grabbed by longtermist angles.
Though I do see the countervailing concern that 80K is strongly longtermism focused, and that it’d be disingenuous for an introduction to 80K to give disproportionate time to neartermist causes, if those are explicitly de-prioritised.
Hi Neel, thanks for these thoughts. I’ve responded to the broader issue in a new top-level comment.
Just on the point that we should be explicit that this is longtermism focused: while longtermism isn’t in the title, I tried to make it pretty explicit in the series’ ‘Episode 0’:
There’s also this earlier on:
Thanks for the clarification. I’m glad that’s in there, and I’ll feel better about this once the ‘Top 10 problem areas’ feed exists, but I still feel somewhat dissatisfied. I think that ‘some EAs prioritise longtermism, some prioritise neartermism or are neutral. 80K personally prioritises longtermism, and does so in this podcast feed, but doesn’t claim to speak on behalf of the movement and will point you elsewhere if you’re specifically interested in global health or animal welfare’ is a complex and nuanced point. I generally think it’s bad to try making complex and nuanced points in introductory material like this, and expect that most listeners who are actually new to EA wouldn’t pick up on that nuance.
I would feel better about this if the outro episode covered the same point, I think it’s easier to convey at the end of all this when they have some EA context, rather than at the start.
A concrete scenario to sketch out my concern:
Alice is interested in EA, and somewhat involved. Her friend Bob is interested in learning more, and Alice looks for intro materials. Because 80K is so prominent, Alice comes across ‘Effective Altruism: An Introduction’ first, and recommends this to Bob. Bob listens to the feed, and learns a lot, but because there’s so much content and Bob isn’t always paying close attention, Bob doesn’t remember all of it. Bob only has a vague memory of Episode 0 by the end, and leaves with a vague sense that EA is an interesting movement, but only cares about weird, abstract things rather than suffering happening today, and concludes that the movement has got a bit too caught up in clever arguments. And as a result, Bob decides not to engage further.
In 2019, 22% of community members thought global poverty should be THE top priority; closer to 62% of people thought it should be one of several near-top priorities. https://forum.effectivealtruism.org/posts/8hExrLibTEgyzaDxW/ea-survey-2019-series-cause-prioritization
Hey Khorton, thanks for checking that. Initially I was puzzled by why I’d made this error, but then I saw that “People could rate more than one area as the “top priority”. As a result the figures sum to more than 100%.”
That survey design makes things a bit confusing, but the end result is that each of these votes can only be read as a vote for one of the top few priorities. — Rob
Hi, I wrote the cause area EA Survey 2019 post for Rethink Priorities so thought I should just weigh in here on this minor point.
Fwiw, I think it’s more accurate to say 22% of respondents thought Global Poverty should be at least one of the top priorities, if not the only top priority, but when forced to pick only one of five traditional cause areas to be the top priority, 32% chose Global Poverty.
The data shows that 476 of the 2164 respondents (22%) who gave any top-priority cause rating in that question selected Global Poverty and “this should be the top priority”. However, 356 of those 476 also selected another cause area as “this should be the top priority”, and 120 selected only Global Poverty for “this should be the top priority”. So 5.5% thought Global Poverty should be the ONLY top priority, and 16.5% thought Global Poverty should be one of the top priorities, among others. Also note that the subsequent question forced respondents to choose only one of five traditional cause areas as the top priority, and there 32% (of 2023) chose Global Poverty.
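To make that breakdown concrete, here is a minimal sketch of the arithmetic (the figures are simply the ones reported above; the variable names are just illustrative):

```python
# Minimal sketch of the survey arithmetic reported above.
# Figures are taken from the comment; variable names are illustrative only.
respondents = 2164           # gave any "top priority" rating in that question
gp_top = 476                 # selected Global Poverty as "this should be the top priority"
gp_and_other = 356           # of those, also selected another cause as a top priority
gp_only = gp_top - gp_and_other  # 120 selected only Global Poverty

print(f"Global Poverty rated a top priority:  {gp_top / respondents:.1%}")        # ~22.0%
print(f"  ...as the ONLY top priority:        {gp_only / respondents:.1%}")       # ~5.5%
print(f"  ...as one among several:            {gp_and_other / respondents:.1%}")  # ~16.5%
```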
Hey commenters — so as we mentioned we’ve been discussing internally what other changes we should make to address the concerns raised in the comments here, beyond creating the ‘ten problem areas’ feed.
We think the best change to make is to record a new episode with someone who is in favour of interventions that are ‘higher-evidence’, or that have more immediate benefits, and then insert that into the introduction series.
Our current interviews about e.g. animals or global development don’t make the case in favour of ‘short-termist’ approaches because the guests themselves aren’t focused on that level of problem prioritisation. That makes them an odd fit for the high-level nature of this series.
An episode focused on higher-evidence approaches has been on the (dismayingly long) list of topics we should cover for a while, but we can expedite it. We’ve got a shortlist of candidate guests to make this episode but would be very interested in getting nominations and votes from folks here to inform our choice.
(It’s slightly hard to say when we’ll be able to make this switch because we won’t be sure whether an interview will fit the bill until we’ve recorded it, but we can make it a priority for the next few months.)
Thanks so much,
— Rob and Keiran
Other good people to consider: Neil Buddy Shah (GiveWell), James Snowden (GiveWell), Alexander Berger (Open Phil), Zach Robinson (Open Phil), Peter Favaloro (Open Phil), Joey Savoie (Charity Entrepreneurship), Karolina Sarek (Charity Entrepreneurship)
I’d be happy to make the case for why Rethink Priorities spends a lot of time researching neartermist topics.
Thanks for taking action on the feedback! I welcome this change and am looking forward to that new episode. Here are 3 people I would nominate for that episode:
Tied as my top preference:
Peter Hurford—Since he has already volunteered to be interviewed anyway, and I don’t think Rethink Priorities’s work has been featured yet on the 80K podcast. They do research across animal welfare, global health and dev’t, meta, and long-termist causes, so seems like they do a lot of thinking about cause prioritization.
Joey Savoie—Since he has experience starting or helping start new charities in the near-termist space, and Charity Entrepreneurship hasn’t been prominently featured yet on the 80K podcast. And Joey probably leans more towards the neartermist side of things than Peter, since Rethink does some longtermist work, while CE doesn’t really yet.
2nd preference:
Neil Buddy Shah—Since he is now Managing Director at GiveWell, and has talked about animal welfare before too.
I could think of more names (i.e. the ones Peter listed), but I wanted to make a few strong recommendations like the ones I wrote above instead. I think one name missing on Peter’s list of people to consider interviewing is Michael Plant.
Thanks for somewhat engaging on this, but this response doesn’t adequately address the main objection I, and others, have been making: your so-called ‘introduction’ will still only cover your preferred set of object-level problems.
To emphasise: if you’re going to push your version of EA, call it ‘EA’, but ignore the perspectives of dedicated, sincere, thoughtful EAs just because you happen not to agree with them, that’s (1) insufficiently epistemically modest, (2) uncooperative, and (3) going to (continue to) needlessly annoy a lot of people, myself included.
Hi Michael,
I wonder if there might have been a misunderstanding. In previous comments, we’ve said that:
We’re adding an episode making the case for near termism, likely in place of the episode on alternative foods. While we want to keep it higher level, that episode is still likely to include more object-level material than e.g. Toby’s included episode does.
We’re going to swap Paul Christiano’s episode out for Ajeya Cotra, which is a mostly meta-level episode that includes coverage of the advantages of near-termism over longtermism.
We’re adding the ‘10 problem areas’ feed.
These changes will leave the ‘An Introduction’ series with very little object-level content at all, and most of it will be in Holden’s first episode, which covers a bit of everything.
That means there won’t be dedicated episodes on our traditional top priorities like AI, biosecurity, nuclear security, or extreme risks from climate change.
They’ll all instead be included on our ‘ten problems’ feed, along with global development, animal welfare, and other topics like journalism and earning-to-give.
Hope that clears things up,
— Rob and Keiran
Seems like a sad development if this is being done for symbolic or coalitional reasons, rather than for the sake of optimizing the specific topics covered in the episodes and the quality of the coverage.
An example of the former would be something along the lines of ‘if we don’t include words like “Animal” and “Poverty” in big enough print on this webpage, that will send the wrong message about how EAs in general feel about those causes’.
An example of the latter would be ‘if we don’t include argument X about animal welfare in one of the first five episodes somewhere, a lot of EA newbies will probably make worse decisions because they’ll be missing that specific key consideration’; or ‘the arguments in the first forty-five minutes of episode n are terrible because X and Y, so that episode should be cut or a rebuttal should be added’.
I like arguments like this: (I) “I think long-termism is false, in ways that make a big difference for EAs’ career selection. Here’s a set of compelling arguments against longtermism; until the 80K Podcast either refutes them to my satisfaction, or adds prominent discussion of them to this podcast episode list, I’ll continue to think this is a bad intro resource, and I’ll tell newbies to check out [X] instead.”
I think it’s fine if 80K disagrees, and I endorse them producing content that reflects their perspective (including the data they get from observing that other smart people disagree), rather than a political compromise between their perspective and others’ perspectives. But equally, I think it’s fine for people who disagree with 80K to try to convince 80K that they’re wrong about stuff like long-termism. If the debate looks broadly like that, then that seems good.
I don’t like arguments like this: (II) “Regardless of how likely you or I think it is that long-termism is false (either before or after updating on others’ beliefs), you should give lots of time to short-termism since a lot of EAs are short-termist.”
There’s a mix of both (I) and (II) in this comment section, so I want to praise the first thing at the same time that I anti-praise the second thing. +1 to ‘your podcast is bad because it says false things X and Y and Z and doesn’t discuss these counter-arguments to X and Y’, −1 to ‘your podcast is bad because it’s unrepresentative of coalitions A and B and C’.
I think the least contentious argument is that ‘an introduction’ should introduce people to the ideas in the area, not just the ideas that the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn’t focus nearly exclusively on ‘your favourite ideology’. A thoughtful educator would include arguments for and against their position and do their best to steelman. Even if your favourite ideology were communism and you were doing ‘an intro to communism’, you would still expect it not to focus just on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as “an intro to longtermism”.
But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you—you can frame this in terms of moral trade, if you want—sometimes you also need to support and include them. The way I’d like EA to work is “this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend”. This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). An alternative, and more or less what 80k had been proposing, is “this is what I believe, but I’m not going to tell you what the alternatives are or what you should do if you disagree”. This isn’t an engagement in moral trade.
I’m pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren’t engaging in moral trade and so decide to embark on ‘moral trade wars’ against each other instead.
Hello Rob and Keiran,
I apologise if this is just rank incompetence/inattention on my part as a forum reader, but I actually can’t find anything mentioning 1. or 2. in your comments on this thread, although I did see your note about 3. (I’ve done control-F for all the comments by “80000_Hours” and mentions of “Paul Christiano”, “Ajeya Cotra”, “Keiran”, and “Rob”. If I’ve missed them, and you provide a (digestible) hat, I will take a bite.)
In any case, the new structure seems pretty good to me - one series that deals with the ideas more or less in the abstract, another that gets into the object-level issues. I think that addresses my concerns but I don’t know exactly what you’re suggesting; I’d be interested to know exactly what the new list would be.
More generally, I’d be very happy to give you feedback on things (I’m not sure how to make this statement more precise, sorry). I would far prefer to be consulted in advance than to feel I had to moan about it on the forum after the fact; this would also avoid conveying the misleading impression that I don’t think you do a lot of excellent work, which I do think. But obviously, it’s up to you whose input you solicit, and how much.
FWIW these sound like fairly good changes to me. :)
(Also for reasons unrelated to the “Was the original selection ‘too longtermist’?” issue, on which I don’t mean to take a stand here.)
Per others: This selection isn’t really ‘leans towards a focus on longtermism’, but rather ‘almost exclusively focuses on longtermism’: roughly any ‘object level’ cause which isn’t longtermism gets a passing mention, whilst longtermism is the subject of 3⁄10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) ‘lean towards’ longtermism either in terms of subject matter or affinity.
Despite being a longtermist myself, I think this is dubious for a purported ‘introduction to EA as a whole’: EA isn’t all-but-exclusively longtermist in either corporate thought or deed.
Were I a more suspicious sort, I’d also find the ‘impartial’ rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:
The first episode with Karnofsky also covers longtermism and AI—at least as much as global health and animals. Yet this didn’t stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto the instance of “entrepreneurship, independent thinking, and general creativity” one wanted to highlight just-so-happens to be a longtermist intervention (versus, e.g. this).
TL;DR. I’m very substantially in agreement with Brian’s comment. I expand on those concerns, put them in stronger terms, then make a further point about how I’d like 80k to have more of a ‘public service broadcasting’ role. Because this is quite long, I thought it was better to have it as a new comment.
It strikes me as obviously inappropriate to describe the podcast series as “effective altruism: an introduction” when it focuses almost exclusively on a specific worldview—longtermism. The fact this objection is acknowledged, and that a “10 problems areas” series is also planned, doesn’t address it. In addition, and relatedly, it seems mistaken to produce and distribute such a narrow introduction to EA in the first place.
The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit - (1) (far) future lives, (2) near-term humans, (3) (near-term) animals. Given this, one cannot, in good faith, call something an ‘introduction’ when it focuses almost exclusively on object-level attempts to benefit just one group. At the very least, this does not seem to be in good faith when there is a substantial fraction of the EA community, and of people who try to live by EA principles, who prioritise each of the three.
For people inside effective altruism who do not share 80k’s worldview, stating that this is an introduction runs the serious risk of conveying to those people that they are not “real EAs”, they are not welcome in the EA community, and their sincere and thoughtful labours and perspectives are unimportant. It does not seem adequately inclusive, welcoming, open-minded, and considerate—values EAs tend to endorse.
For people outside EA who are being introduced to the ideas for the first time, it genuinely fails to introduce them to the relevant possibilities of how they might do the most good, leaving them with a misleading impression of what EA is or can be. It would have been trivially easy to include the Bollard and Glennerster interviews—or something else to represent those who focus on animals or humans in the near-term – and so indicate that those are credible altruistic paths and enthuse those who might take them.
By analogy, if someone taught an “introduction to political ideologies” course which glossed over conservatism and liberalism to focus primarily on (the merits of) socialism, you would assume they were either incompetent or pushing an agenda. Either way, if you hoped that they would cover all the material and do so in an even-handed manner, you would be disappointed.
Given this podcast series is not an introduction to effective altruism, it should not be called “effective altruism: an introduction”. More apt might be “effective longtermism: an introduction” or “80k’s opinionated introduction to effective altruism” or “effective altruism: 80k’s perspective”. In all cases, there should be more generous signposting of what the other points of view are and where they could be found.
A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful, individuals aspiring to do the most good. I struggle to see why someone would produce such a narrow introduction unless they thought those holding alternative views were errant and irrelevant fools.
I can imagine someone defending 80k by saying that this is their introduction to effective altruism and there's nothing to stop someone else writing their own and sharing it (note RobBensinger does this below).
While this is technically true, I do not find it compelling for the following reason. In a cooperative altruistic community, you want to have a division, rather than a duplication, of labour, where people specialise in different tasks. 80k has become, in practice, the primary source of introductory materials to EA: it is the single biggest channel by which people are introduced to effective altruism, with 17% of EA survey respondents saying they first heard about EA through it; it produces much of the introductory content individuals read or listen to. 80k may not have a monopoly on telling people about EA, but it is something like the ‘market leader’.
The way I see it, given 80k’s dominant position, they should fulfil something like a public service broadcasting role for EA, where they strive to be impartial, inclusive, and informative (https://en.wikipedia.org/wiki/Public_broadcasting).
Why? Because they are much better placed to do it than anyone else! In terms any 80k reader will be familiar with, 80k should do this because it is their comparative advantage and they are not easily replaced. Their move to focusing on longtermism has left a gap. A new organisation, Probably Good, has recently stepped into this gap to provide more cause-neutral careers advice, but I see it as cause for regret that this had to happen.
While I think it would be a good idea if 80k had more of a public service broadcasting model, I don't expect this to happen, seeing as they've consciously moved away from it. It does, however, seem feasible for 80k to be a bit more inclusive: in this case, one very easy thing would be to expand the list from 10 to 12 items so that concern for animals and near-term humans features. It would be a huge help to non-longtermist EAs if 80k talked about them a bit (more), and it would be a small additional cost to 80k.
This is a specific way of framing EA, and one that I think feels natural in part for ‘sociology and history of EA’ reasons: individual EAs often self-identify as either interested in existential risk, interested in animal welfare, or interested in third-world development, in large part due to the early influence of Peter Singer, GiveWell, LessWrong, and the Oxford longtermists, who broke in different directions on these questions. The EA Funds use a division like this, and early writing about EA liked to emphasize this division.
But I don’t agree that this is the most natural (much less the only reasonable) way of dividing up the space of high-impact altruistic goals or projects, so I don’t think all intro resources should emphasize this framing.
If you’d framed EA as being about ‘(1) causing positive experiences and (2) preventing negative ones’, you could have argued that EA is about the choice between negative-leaning and positive-leaning utilitarianism, and that all intro resources must put similar emphasis on those two perspectives (regardless of the merits of the perspectives in the eyes of the intro-resource-maker).
If you’d framed EA as being about ‘direct aid, institution reform, cause prioritization, and improving EAs’ effectiveness’, you could argue that any intro resource is obviously bad if it neglects any one of those categories, even if it’s just because they’re carving up the space differently.
If you’d framed EA as being about ‘helping people in the developed world, helping people in the developing world, helping animals, or helping far-future lives’, then we’d have needed to give equal prominence to more nationalist and regionalist perspectives on altruism as well.
My main objection is to the structure of this argument. There are worlds where EA initially considered it an open question whether nationalism is a reasonable perspective to bring to cause prioritization; and worlds where lots of EAs later realized they were wrong and nationalism isn’t a good perspective. In those worlds, it’s important that we not be so wedded to early framings of ‘the key disagreements in the movement’ that no one can ever move on from treating nationalist-EA as a contender.
(This isn’t intended as an argument for ‘our situation is analogous to the nationalism one’; it’s intended as a structural objection to arguments that take for granted a certain framing of EA, require all intro sources to fit that frame, and make it hard to update away from that frame in worlds where some of the options do turn out to be bad.)
Hi Rob, I agree with your and Ryan’s point that the poverty/animals/future split is something that evolved because of EA’s history, and I can imagine a world with different categories of cause areas.
But something that I keep seeing missed is this:
I’m really troubled by any "Introduction to EA" that suggests EA is about longtermism. A brief intro saying "by the way, some people have different views to the following 20 hours of content!" is not sufficient. This should be relabelled as an intro to EA longtermism if it remains in its current form.
Although I understand the nationalism example isn't meant to be analogous, my impression is that this structural objection only really applies when our situation is analogous.
If EA had historically paid a lot of attention to nationalism (or transhumanism, the scepticism community, or whatever else) but had by and large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what the field focuses on now versus then as an open question.
Yet, however you slice it, EA as it stands hasn't by and large 'moved on' to being 'basically longtermism', with its interest in (e.g.) global health clearly atavistic. I'd be willing to go to bat for a substantial slant towards longtermism, as (I aver) its over-representation amongst the more highly engaged, and the disproportionate migration of folks to longtermism from other areas, warrant the claim that an epistocratic weighting of consensus would favour longtermism over anything else. But even this has limits, which 'greatly favouring longtermism over everything else' exceeds.
How you choose to frame an introduction is up for grabs, and I don't think 'the big three' is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (and, further, the one you are sympathetic to) out of all proportion to any reasonable account of its prominence within X, something has gone wrong.
This isn’t the main problem I had in mind, but it’s worth noting that EA animal advocacy is also aimed at improving welfare and/or preventing suffering in future minds, even when it’s not aimed at far-future animals. The goal of factory farm reform for chickens is to affect (or prevent) future chickens, not chickens that are alive at the time people develop or push for the reform.
Hey Brian, Ula, and other commenters,
Thanks again for all the feedback! To what extent each piece of content closely associated with EA should aim to be 'representative' is a vexed issue that folks are going to continue to have different views on, and we can't produce something that's ideal for everyone simultaneously.
Fortunately in this case I think there’s a change we can make that will be an improvement from everyone’s perspective.
We had planned to later make another collection that would showcase a wider variety of things that EAs are up to. Given your worries combined with the broader enthusiasm for the underlying concept, it seems like we should just do that as soon as it’s practical for Keiran and me to put it together.
That feed would be called something like ‘Effective Altruism: Ten Problem Areas’ and feature Bollard and Glennerster, and other guests on topics like journalism, climate change, pandemics, earning to give, and a few others which we’ll think about.
We’ll promote it similarly — and cross-promote between the two collections — so anyone who wants to learn about those problem areas will end up doing so.
(Independently, we also realised that we should sub Ajeya's episode into 'An Introduction'. The only reason that didn't happen the first time around is that we settled on this list of ten in 2020, before Ajeya's episode existed. Ajeya's interview is more neutral about longtermism than the episode it replaces.)
Speaking personally as Rob (because I know other people at 80,000 Hours have different perspectives), I favour a model where there is a range of varied introductory materials, some of which lean towards a focus on poverty, some towards animals, some towards longtermism, some with other angles, and still others that aim to be representative.
In any case, after this reshuffle we’ll have two feeds for you — one that leans into the way we think about things at 80,000 Hours, and another that shows off the variety of causes prioritised by EAs.
Folks can then choose whichever one they would rather share, or listen to themselves. (And fingers crossed many people will opt to listen to both!)
Look forward to hearing your thoughts,
— Rob and Keiran
Hi Rob and Keiran, thanks for the quick response! I agree that this is a difficult issue. Thanks for letting us know about the second feed with a wider variety of things that EAs are up to. I think that's a good thing to have.
Even with that second feed, though, I think it would still be better if the "Effective Altruism: An Introduction" feed had the Lewis Bollard episode and an episode on global health and dev't, whether by substituting episodes or expanding the list to 12 episodes. I don't want to make this into a big debate, but I want to share my point of view below.
Because the feed is marketed as something "to help listeners quickly get up to speed on the school of thought known as effective altruism", and because of 80K's wide reach, I think some people seeing this list or listening to this feed may come away with an unrepresentative view of what EA is. Specifically, they might think the community is more longtermist than it really is, or feel they are expected to lean longtermist.
Also, all or most of the popular "Intro to EA" resources or collections out there give substantial coverage to global health and dev't and animal welfare, such as the Intro to EA on the EA website, the Intro EA Fellowship syllabus created by EA Oxford (with input from CEA and other EAs), and Will MacAskill's TED talk. And CEA still makes sure to include substantial GH&D (Global Health and Dev't) and animal welfare content in its EA conferences.
All of these reflect that the community still heavily prioritizes these two causes. I know that key leaders of EA do lean longtermist, as seen in 80K's key ideas page, some past leaders forum surveys, or how 3-4 weeks of the Intro EA Fellowship syllabus are on longtermism-related content, while only 1-2 weeks are on GH&D and 1 week is on animal welfare / moral circle expansion.
I’m fine with the community and its resources leaning longtermist, since I generally agree with longtermism. But I don't think "Intro to EA" resources or collections like 80K's feed should only have snippets/intros/outros on GH&D and animal welfare, and then be ~95% longtermist content.
Of course, people consuming your feed who are interested in global health and dev't and animal welfare could listen to your episode 0/intros/outros, or find other podcast episodes that interest them through your website. But I worry about a larger trend here of GH&D and animal welfare content being drastically reduced, and of people interested in these causes feeling more and more alienated from the EA community.
I think 80K has significant influence over the EA community and its culture. So when 80K decides to reshape the way effective altruism is introduced so that it is ~95% longtermist content, that could influence the community significantly in ways that people not interested in or working on longtermism would not want, including leaders in the EA community who work on non-longtermist causes.
I’d understand if 80K still decides not to include episodes on GH&D and animal welfare in your Intro to EA feed, since you're free to do what you want, but I hope I've laid out some arguments for why that might be a bad decision.
It’s a bit time-consuming and effortful to write these, so I hope this doesn’t blow up into a huge debate or something like that. Just trying to offer my point of view, hoping that it helps!
Since I come from the EAA side of effective altruism, I feel that Lewis Bollard's episode is really missing here. I would dearly appreciate it if something titled "Effective Altruism: An Introduction" included EAA representation in its introductory materials, especially since in countries like Poland (where I am from) EA-minded folks mostly come from the animal movement and are drawn to EA because of effective animal advocacy.
Or maybe it would just be worth renaming the feed to: 80,000 Hours Introduction to Effective Altruism?
Hi all of the commenters here — thanks again for all the further thoughts on the compilation.
We’re in the process of discussing your feedback internally and figuring out whether to make any further changes, and if so what they should be. We don’t want to rush that, but will get back to you as soon as we can.
— Rob and Keiran