9. Experts and common sense suggest that it is plausible that the best thing you can do for the long term is to make the short term go well
It is not unusual to hear people say that the best thing you can do for the long term is to make the short term go well. This seems like a reasonable common-sense view.
Even people who are trusted and considered experts within the EA community express this view. For example, Peter Singer suggests that “If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway” (source).
It’s unfortunate that Singer didn’t expand more on this, since we’re left to speculate, and my initial reaction is that this is false, and on a more careful reading, probably misleading.
How does he imagine reducing poverty and increasing education would move things in the right direction? Does it lead to more fairly distributed influence over the future, which has good effects, and/or a wider moral circle? Is he talking about compounding wealth/growth? Does it mean more people are likely to contribute to technological solutions? But what about accelerating technological risks?
Does he think “enabling people to escape poverty and get an education” moves things in the right direction as much as almost anything else in expectation, given that we have both likelihood and “distance” to consider?
Maybe “enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do”, but “almost anything else” could leave a relatively small share of interventions that we can reliably identify as doing much better for the long term (and without also picking things that backfire overall).
Is he just expressing skepticism that longtermist interventions actually reliably move things in the right direction at all without backfiring, e.g. due to cluelessness/deep uncertainty? I’m most sympathetic to this, but if this is what he meant, he should have said so.
Also, I don’t think Singer is an expert in longtermist thinking and longtermist interventions, and I have not seen him engage a lot with longtermism. I could be wrong. Of course, that may be because he’s skeptical of longtermism, possibly justifiably so.
I think the weak argument here is not that Singer has thought about this a lot and has an informed view. It is maybe something like: there is an intuition that convergence makes sense, even smart folk (e.g. Singer) have this intuition, and intuitions are some evidence.
FWIW I don’t think that Peter Singer piece is a great piece.