Hey commenters — so, as we mentioned, we’ve been discussing internally what other changes we should make to address the concerns raised in the comments here, beyond creating the ‘ten problem areas’ feed.
We think the best change to make is to record a new episode with someone who is in favour of interventions that are ‘higher-evidence’, or that have more immediate benefits, and then insert that into the introduction series.
Our current interviews about e.g. animals or global development don’t make the case in favour of ‘short-termist’ approaches because the guests themselves aren’t focused on that level of problem prioritisation. That makes them an odd fit for the high-level nature of this series.
An episode focused on higher-evidence approaches has been on our (dismayingly long) list of topics to cover for a while, but we can expedite it. We’ve got a shortlist of candidate guests for this episode, but we’d be very interested in nominations and votes from folks here to inform our choice.
(It’s slightly hard to say when we’ll be able to make this switch because we won’t be sure whether an interview will fit the bill until we’ve recorded it, but we can make it a priority for the next few months.)
Thanks so much,
— Rob and Keiran
Other good people to consider: Neil Buddy Shah (GiveWell), James Snowden (GiveWell), Alexander Berger (Open Phil), Zach Robinson (Open Phil), Peter Favaloro (Open Phil), Joey Savoie (Charity Entrepreneurship), Karolina Sarek (Charity Entrepreneurship)
I’d be happy to make the case for why Rethink Priorities spends a lot of time researching neartermist topics.
Thanks for taking action on the feedback! I welcome this change and am looking forward to the new episode. Here are three people I’d nominate for it:
Tied as my top preference:
Peter Hurford—Since he has already volunteered to be interviewed, and I don’t think Rethink Priorities’s work has been featured on the 80K podcast yet. They do research across animal welfare, global health and development, meta, and longtermist causes, so it seems like they do a lot of thinking about cause prioritization.
Joey Savoie—Since he has experience starting or helping start new charities in the neartermist space, and Charity Entrepreneurship hasn’t been prominently featured on the 80K podcast yet. And Joey probably leans more towards the neartermist side of things than Peter, since Rethink does some longtermist work, while CE doesn’t really do any yet.
2nd preference:
Neil Buddy Shah—Since he is now Managing Director at GiveWell, and has talked about animal welfare before too.
I could think of more names (e.g. the ones Peter listed), but I wanted to make a few strong recommendations instead. I think one name missing from Peter’s list of people to consider interviewing is Michael Plant.
Thanks for somewhat engaging on this, but this response doesn’t adequately address the main objection I, and others, have been making: your so-called ‘introduction’ will still only cover your preferred set of object-level problems.
To emphasise: if you’re going to push your version of EA, call it ‘EA’, but ignore the perspectives of dedicated, sincere, thoughtful EAs just because you happen not to agree with them, that’s (1) insufficiently epistemically modest, (2) uncooperative, and (3) going to (continue to) needlessly annoy a lot of people, myself included.
Hi Michael,
I wonder if there might have been a misunderstanding. In previous comments, we’ve said that:
1. We’re adding an episode making the case for near-termism, likely in place of the episode on alternative foods. While we want to keep it higher-level, that episode is still likely to include more object-level material than e.g. Toby’s included episode does.
2. We’re going to swap Paul Christiano’s episode out for Ajeya Cotra’s, which is a mostly meta-level episode that includes coverage of the advantages of near-termism over longtermism.
3. We’re adding the ‘ten problem areas’ feed.
These changes will leave the ‘An Introduction’ series with very little object-level content at all; most of what remains will be in Holden’s first episode, which covers a bit of everything.
That means there won’t be episodes dedicated to our traditional top priorities like AI, biosecurity, nuclear security, or extreme risks from climate change.
They’ll all instead be included on our ‘ten problem areas’ feed, along with global development, animal welfare, and other topics like journalism and earning to give.
Hope that clears things up,
— Rob and Keiran
Seems like a sad development if this is being done for symbolic or coalitional reasons, rather than for the sake of optimizing the specific topics covered in the episodes and the quality of the coverage.
An example of the former would be something along the lines of ‘if we don’t include words like “Animal” and “Poverty” in big enough print on this webpage, that will send the wrong message about how EAs in general feel about those causes’.
An example of the latter would be ‘if we don’t include argument X about animal welfare in one of the first five episodes somewhere, a lot of EA newbies will probably make worse decisions because they’ll be missing that specific key consideration’; or ‘the arguments in the first forty-five minutes of episode n are terrible because X and Y, so that episode should be cut or a rebuttal should be added’.
I like arguments like this: (I) “I think long-termism is false, in ways that make a big difference for EAs’ career selection. Here’s a set of compelling arguments against long-termism; until the 80K Podcast either refutes them to my satisfaction, or adds prominent discussion of them to this podcast episode list, I’ll continue to think this is a bad intro resource, and I’ll tell newbies to check out [X] instead.”
I think it’s fine if 80K disagrees, and I endorse them producing content that reflects their perspective (including the data they get from observing that other smart people disagree), rather than a political compromise between their perspective and others’ perspectives. But equally, I think it’s fine for people who disagree with 80K to try to convince 80K that they’re wrong about stuff like long-termism. If the debate looks broadly like that, then that seems good.
I don’t like arguments like this: (II) “Regardless of how likely you or I think it is that long-termism is false (either before or after updating on others’ beliefs), you should give lots of time to short-termism, since a lot of EAs are short-termist.”
There’s a mix of both (I) and (II) in this comment section, so I want to praise the first thing at the same time that I anti-praise the second thing. +1 to ‘your podcast is bad because it says false things X and Y and Z and doesn’t discuss these counter-arguments to X and Y’, −1 to ‘your podcast is bad because it’s unrepresentative of coalitions A and B and C’.
I think the least contentious argument is that ‘an introduction’ should introduce people to the ideas in the area, not just the ideas the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn’t focus nearly exclusively on your favourite ideology: a thoughtful educator would include arguments for and against their position and do their best to steelman the alternatives. Even if your favourite ideology were communism and you were doing an ‘intro to communism’, you would still expect it not to focus just on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as “an intro to longtermism”.
But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you, sometimes you also need to support and include them (you can frame this in terms of moral trade, if you want). The way I’d like EA to work is: “this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend”. This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). The alternative, and more or less what 80k had been proposing, is: “this is what I believe, but I’m not going to tell you what the alternatives are, or what you should do if you disagree”. That isn’t engaging in moral trade.
I’m pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren’t engaging in moral trade and so decide to embark on ‘moral trade wars’ against each other instead.
Hello Rob and Keiran,
I apologise if this is just rank incompetence/inattention on my part as a forum reader, but I actually can’t find anything mentioning 1 or 2 in your comments on this thread, although I did see your note about 3. (I’ve done a Ctrl-F search of all the comments by “80000_Hours” and for mentions of “Paul Christiano”, “Ajeya Cotra”, “Keiran”, and “Rob”. If I’ve missed them, and you provide a (digestible) hat, I will take a bite.)
In any case, the new structure seems pretty good to me: one series that deals with the ideas more or less in the abstract, another that gets into the object-level issues. I think that addresses my concerns, but I don’t know exactly what you’re proposing; I’d be interested to see exactly what the new list would be.
More generally, I’d be very happy to give you feedback on things (I’m not sure how to make this offer more precise, sorry). I would far prefer to be consulted in advance than to feel I had to moan about it on the forum after the fact; this would also avoid conveying the misleading impression that I don’t think you do a lot of excellent work (I do). But obviously, it’s up to you whose input you solicit, and how much.
FWIW these sound like fairly good changes to me. :)
(Also for reasons unrelated to the “Was the original selection ‘too longtermist’?” issue, on which I don’t mean to take a stand here.)