I'm sceptical of this. It would seem to me a surprising and suspicious convergence if the actions that are best in terms of short-run effects were also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short-run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say).
I think I essentially agree, and I think that these sorts of points are too often ignored. But I don't 100% agree. In particular, I wouldn't be massively surprised if, after a few years of relevant research, we basically concluded that there's a systematic reason why the sorts of things that are good for the short term will tend to also be good for the long term, and that we can basically get no better answers to what will be good for the long term than that. (This would also be consistent with Greaves and MacAskill's suggestion of speeding up progress as a possible longtermist priority.)
I'd bet against that, but not with massive odds. (It'd be better for me to operationalise my claim more and put a number on it, rather than making these vague statements; I'm just taking the lazy option to save time.)
And then if that were true, it could make sense to focus, most of the time, just on evaluating things based on their short-term effects, because those are easier to evaluate. We could have most people focusing on that proxy most of the time, while a smaller number of people continue checking whether it seems a good proxy and whether we can come up with better ones.
I think most longtermists are already doing something that's not massively different from that: Most of us focus most of the time on reducing existential risk, or some specific type of existential risk (e.g., extinction caused by AI), as if that's our ultimate, terminal goal. Or we might even most of the time focus on an even more "proximate" or "merely instrumental" proxy, like "improving institutions' ability and motivation to respond effectively to [x]", again as if that's a terminal goal.
(I mean this to stand in contrast to consciously focusing on "improving the long-term future as much as possible", and continually re-deriving what proxies to focus on based on that goal. That would just be less efficient.)
Then we sometimes check in on whether the proxies we focus on are actually what's best for the future.
I think this approach makes sense, though it's also good to remain aware of what's a proxy and what's an ultimate goal, and to recognise our uncertainty about how good our proxies are. (This post seems relevant, and in any case is quite good.)