I appreciate this post and think you make broadly reasonable arguments!
As the cited example of “screw longtermism”, I feel I should say that my crux here is mostly that I think AI x-risk is just actually really important, and that, given this, longtermism is bad marketing and unnecessarily exclusionary.
It’s exclusionary because it’s a niche and specific philosophical position with some pretty unsavoury conclusions, and is IMO incredibly paralysing and impractical if you AREN’T trying to minimise x-risk. I think that, framed right, “make sure AI does what we want, especially as it gets far more capable” is just an obviously correct thing to want, and the movement already has a major PR problem among natural allies (see, e.g., Timnit Gebru’s Twitter) that this kind of framing exacerbates.
It’s bad marketing because it’s easily conflated with neglecting people alive today, Pascal’s Mugging, naive utilitarianism, strong longtermism, etc. And I often see people mocking EA or AI Safety by pointing to the same obvious weakness: “if there’s just a 0.0001% chance of it being really bad, we should drop everything else to fix it”. I think this is actually a pretty valid thing to mock!
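For concreteness, here is a minimal sketch of the expected-value arithmetic that makes the mocked argument tick. Every number below is a made-up placeholder for illustration, not anyone's actual estimate:

```python
# Sketch of the naive expected-value move behind "just a 0.0001% chance".
# All figures are hypothetical placeholders, not real estimates.

p_catastrophe = 1e-6      # "just a 0.0001% chance of it being really bad"
stakes = 1e15             # assumed astronomical stakes (e.g. future lives)
mundane_benefit = 1e4     # assumed benefit of an ordinary intervention

ev_drop_everything = p_catastrophe * stakes  # 1e-6 * 1e15 = 1e9
ev_mundane = mundane_benefit                 # 1e4

print(f"EV of 'drop everything': {ev_drop_everything:.0e}")
print(f"EV of mundane cause:     {ev_mundane:.0e}")
# The tiny probability is swamped by the huge stakes, so the naive
# calculation says to "drop everything" no matter how small p gets --
# the Pascal's Mugging structure that critics find easy to mock.
```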
And even for people who avoid that trap, it seems pretty patronising to me to frame “caring about future people” as an important and cruxy moral insight—in practice the distinguishing thing about EA is our empirical beliefs about killer robots!
I am admittedly also biased because I find most moral philosophy debates irritating, and think that EAs as a whole spend far too much time on them rather than actually doing things!
Thanks :) And thanks for your original piece!
There seems to be tension in your comment here. You’re claiming both that longtermism is a niche and specific philosophical position and that it’s patronising to point out to people.
Perhaps you’re pointing to some hard trade-off? Like, if you make the full argument, it’s paralysing and impractical, but if you just state the headline, it’s obvious? That strikes me as a bit of a double-strawman—you can explain the idea in varying levels of depth depending on the context.
I don’t think longtermism need be understood as a niche and specific philosophical position, and discussion of longtermism doesn’t need to engage in complex moral philosophy, but I agree that it’s often framed this way (in the wrong contexts) and that this is bad for the reasons you point to. I think the first chapter of What We Owe the Future gets this balance right, and it’s probably my favourite explanation of longtermism.
I disagree that most people already buy its core claim, which I think is more like “protecting the long-term future of humanity is extremely important and we’re not doing it” and not just “we should care about future people”. I think many people do “care” in the latter way but aren’t sincerely engaging with the implications of that.
I agree with this!
Yeah, fair point, I’m conflating two things here. First, strong longtermism/total utilitarianism, or the slightly weaker form of “the long-term future is overwhelmingly important and mostly dominates short-term considerations”, is what I’m calling the niche position. Second, “future people matter, and we shouldn’t only care about people alive today” is the common-sense position that it feels patronising to state. These are obviously very different things!
In practice, my perception of EA outreach is that it mostly falls into one of those buckets? But this may be me being uncharitable. WWOTF is definitely more nuanced than this, but I mostly just disagree with its message because I think it significantly underrates AI.
I do think that the position “the long-term future matters a lot, though not overwhelmingly, and is significantly underrated/under-invested in today” is reasonable and correct, and falls in neither of those extremes. And I would be pro most of society agreeing with it! I just think that the main way to robustly affect the long-term future seems to be x-risk reduction, and that the risk is high enough that this straightforwardly makes sense from common-sense morality.
All makes sense, I agree it’s usually one of those two things and that the wrong one is sometimes used.
Yeah, I think that last sentence is where we disagree. I think it’s a reasonable view that I’d respond to with something like my “our situation could change” or “our priorities could change”. But I’m glad not everyone is taking the same approach and think we should make both of these (complementary) cases :)
Thanks for engaging!
I’d say the biggest red flag for moral philosophy is that it still uses intuition both as a hypothesis generator and as reliable evidence, when intuition is basically worthless as evidence for which conclusions to accept. Yet that’s the state moral philosophy is in. It’s akin to the pre-scientific era of knowledge.
That’s why it’s so irritating.
So I can draw one of two conclusions from that:
1. Mind-independent facts about morality are not real, in the same vein as identity not being real (controversially, consciousness probably falls into this category too).
2. There is a moral reality, but moral philosophy needs to be improved.
And I do think it’s valuable for EA to pursue the latter, if only to see whether there is a moral reality at the end of it all.