Seems like a sad development if this is being done for symbolic or coalitional reasons, rather than for the sake of optimizing the specific topics covered in the episodes and the quality of the coverage.
An example of the former would be something along the lines of ‘if we don’t include words like “Animal” and “Poverty” in big enough print on this webpage, that will send the wrong message about how EAs in general feel about those causes’.
An example of the latter would be ‘if we don’t include argument X about animal welfare in one of the first five episodes somewhere, a lot of EA newbies will probably make worse decisions because they’ll be missing that specific key consideration’; or ‘the arguments in the first forty-five minutes of episode n are terrible because X and Y, so that episode should be cut or a rebuttal should be added’.
I like arguments like this: (I) "I think longtermism is false, in ways that make a big difference for EAs' career selection. Here's a set of compelling arguments against longtermism; until the 80K Podcast either refutes them to my satisfaction, or adds prominent discussion of them to this podcast episode list, I'll continue to think this is a bad intro resource, and I'll tell newbies to check out [X] instead."
I think it's fine if 80K disagrees, and I endorse them producing content that reflects their perspective (including the data they get from observing that other smart people disagree), rather than a political compromise between their perspective and others' perspectives. But equally, I think it's fine for people who disagree with 80K to try to convince 80K that they're wrong about stuff like longtermism. If the debate looks broadly like that, then that seems good.
I don't like arguments like this: (II) "Regardless of how likely you or I think it is that longtermism is false (either before or after updating on others' beliefs), you should give lots of time to short-termism since a lot of EAs are short-termist."
There’s a mix of both (I) and (II) in this comment section, so I want to praise the first thing at the same time that I anti-praise the second thing. +1 to ‘your podcast is bad because it says false things X and Y and Z and doesn’t discuss these counter-arguments to X and Y’, −1 to ‘your podcast is bad because it’s unrepresentative of coalitions A and B and C’.
I think the least contentious argument is that 'an introduction' should introduce people to the ideas in the area, not just the ideas that the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn't focus nearly exclusively on 'your favourite ideology'. A thoughtful educator would include arguments for and against their position and do their best to steelman. Even if your favourite ideology were communism and you were producing 'an intro to communism', you would still expect it not to focus solely on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as "an intro to longtermism".
But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you (you can frame this in terms of moral trade, if you like), sometimes you also need to support and include them in turn. The way I'd like EA to work is "this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend". This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). The alternative, and more or less what 80k had been proposing, is "this is what I believe, but I'm not going to tell you what the alternatives are or what you should do if you disagree". That isn't an engagement in moral trade.
I’m pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren’t engaging in moral trade and so decide to embark on ‘moral trade wars’ against each other instead.