Re your final point, I mostly just think they miss the mark by not really addressing the question of what the long-term distribution of animal welfare looks like. (I'm personally pretty surprised by the comparative lack of discussion of how likely our lightcone is to be net bad by the lights of people who put significant weight on animal welfare.)
Maybe I’m getting mixed up, but weren’t we talking about convincing people who believe that “the future world will likely contain far less factory farming and many more happy humans”? (I.e., the people for whom the long-term distribution of animal welfare is, by assumption, not that much of a worry)
Maybe you had in mind the people who (a) significantly prioritize animal welfare, and (b) think the long-term future will be bad due to animal welfare issues? Yeah, I’d also like to see more good content for these people. (My sense is there’s been a decent amount of discussion, but it’s been kind of scattered (which also makes it harder to feature in a curriculum). Maybe you’ve already seen all this, but I personally found section 1.2 of the GPI agenda helpful as a compilation of this discussion.)
Ah sorry, the original thing was badly phrased. I meant that a valid objection to x-risk work might be "I think that factory farming is really, really bad right now, and I prioritise it over dealing with x-risk." And if you don't care about the distant future, that argument seems pretty legit from some moral perspectives? Whereas if you do care about the distant future, you need to answer the question of what the future distribution of animal welfare looks like, and it's not obviously positive. So to convince these people, you'd need to convince them that the distribution is positive.