Hi Michael,
I wonder if there might have been a misunderstanding. In previous comments, we’ve said that:
1. We’re adding an episode making the case for near-termism, likely in place of the episode on alternative foods. While we want to keep it higher level, that episode is still likely to include more object-level material than e.g. Toby’s included episode does.
2. We’re going to swap Paul Christiano’s episode out for Ajeya Cotra’s, which is a mostly meta-level episode that includes coverage of the advantages of near-termism over longtermism.
3. We’re adding the ‘10 problem areas’ feed.
These changes will leave the ‘An Introduction’ series with very little object-level content at all, and most of it will be in Holden’s first episode, which covers a bit of everything.
That means there won’t be episodes dedicated to our traditional top priorities like AI, biosecurity, nuclear security, or extreme risks from climate change.
They’ll all instead be included on our ‘ten problems’ feed, along with global development, animal welfare, and other topics like journalism and earning-to-give.
Hope that clears things up,
— Rob and Keiran
Seems like a sad development if this is being done for symbolic or coalitional reasons, rather than for the sake of optimizing the specific topics covered in the episodes and the quality of the coverage.
An example of the former would be something along the lines of ‘if we don’t include words like “Animal” and “Poverty” in big enough print on this webpage, that will send the wrong message about how EAs in general feel about those causes’.
An example of the latter would be ‘if we don’t include argument X about animal welfare in one of the first five episodes somewhere, a lot of EA newbies will probably make worse decisions because they’ll be missing that specific key consideration’; or ‘the arguments in the first forty-five minutes of episode n are terrible because X and Y, so that episode should be cut or a rebuttal should be added’.
I like arguments like this: (I) “I think long-termism is false, in ways that make a big difference for EAs’ career selection. Here’s a set of compelling arguments against long-termism; until the 80K Podcast either refutes them to my satisfaction, or adds prominent discussion of them to this podcast episode list, I’ll continue to think this is a bad intro resource, and I’ll tell newbies to check out [X] instead.”
I think it’s fine if 80K disagrees, and I endorse them producing content that reflects their perspective (including the data they get from observing that other smart people disagree), rather than a political compromise between their perspective and others’ perspectives. But equally, I think it’s fine for people who disagree with 80K to try to convince 80K that they’re wrong about stuff like long-termism. If the debate looks broadly like that, then that seems good.
I don’t like arguments like this: (II) “Regardless of how likely you or I think it is that long-termism is false (either before or after updating on others’ beliefs), you should give lots of time to short-termism since a lot of EAs are short-termist.”
There’s a mix of both (I) and (II) in this comment section, so I want to praise the first thing at the same time that I anti-praise the second thing. +1 to ‘your podcast is bad because it says false things X and Y and Z and doesn’t discuss these counter-arguments to X and Y’, −1 to ‘your podcast is bad because it’s unrepresentative of coalitions A and B and C’.
I think the least contentious argument is that ‘an introduction’ should introduce people to the ideas in the area, not just the ideas that the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn’t focus nearly exclusively on ‘your favourite ideology’. A thoughtful educator would include arguments for and against their position and do their best to steelman. Even if your favourite ideology were communism and you were doing ‘an intro to communism’, you would still expect it not to focus solely on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as “an intro to longtermism”.
But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you (you can frame this in terms of moral trade, if you like), sometimes you also need to support and include them. The way I’d like EA to work is “this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend”. This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). An alternative, and more or less what 80k had been proposing, is “this is what I believe, but I’m not going to tell you what the alternatives are or what you should do if you disagree”. That isn’t engaging in moral trade.
I’m pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren’t engaging in moral trade and so decide to embark on ‘moral trade wars’ against each other instead.
Hello Rob and Keiran,
I apologise if this is just rank incompetence/inattention on my part as a forum reader, but I actually can’t find anything mentioning 1. or 2. in your comments on this thread, although I did see your note about 3. (I’ve done control-F for all the comments by “80000_Hours” and mentions of “Paul Christiano”, “Ajeya Cotra”, “Keiran”, and “Rob”. If I’ve missed them, and you provide a (digestible) hat, I will take a bite.)
In any case, the new structure seems pretty good to me: one series that deals with the ideas more or less in the abstract, another that gets into the object-level issues. I think that addresses my concerns, but I don’t know exactly what you’re suggesting; I’d be interested to see what the new list would be.
More generally, I’d be very happy to give you feedback on things (I’m not sure how to make this statement more precise, sorry). I would far prefer to be consulted in advance than to feel I had to moan about it on the forum after the fact; this would also avoid conveying the misleading impression that I don’t think you do a lot of excellent work, which I do. But obviously, it’s up to you whose input you solicit, and how much.
FWIW these sound like fairly good changes to me. :)
(Also for reasons unrelated to the “Was the original selection ‘too longtermist’?” issue, on which I don’t mean to take a stand here.)