Yeah, fair point, I’m conflating two things here. The first, which I’m calling the niche position, is strong longtermism/total utilitarianism, or the slightly weaker form of “the long-term future is overwhelmingly important, and mostly dominates short-term considerations”. The second is “future people matter and we should not only care about people alive today”, the common-sense, borderline-patronising position. These are obviously very different things!
In practice, my perception of EA outreach is that it mostly falls into one of those buckets? But this may be me being uncharitable. WWOTF is definitely more nuanced than this, but I mostly just disagree with its message because I think it significantly underrates AI.
I do think that the position of “the long-term future matters a lot, not overwhelmingly, but is significantly underrated/underinvested in today” is reasonable and correct and falls into neither of those extremes. And I would be pro most of society agreeing with it! I just think that the main way that seems likely to robustly affect the long-term future is x-risk reduction, and that the risk is high enough that this straightforwardly makes sense from common-sense morality.
All makes sense, I agree it’s usually one of those two things and that the wrong one is sometimes used.
Yeah, I think that last sentence is where we disagree. I think it’s a reasonable view that I’d respond to with something like my “our situation could change” or “our priorities could change”. But I’m glad not everyone is taking the same approach and think we should make both of these (complementary) cases :)
Thanks for engaging!