I’m sceptical of this. It would be a surprising and suspicious convergence if the actions that are best in terms of short-run effects were also the actions that are best in terms of long-run effects. We should be predisposed to think this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short-run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say).
I think I essentially agree, and I think that these sorts of points are too often ignored. But I don’t 100% agree. In particular, I wouldn’t be massively surprised if, after a few years of relevant research, we basically concluded that there’s a systematic reason why the sort of things that are good for the short-term will tend to also be good for the long-term, and that we can basically get no better answers to what will be good for the long-term than that. (This would also be consistent with Greaves and MacAskill’s suggestion of speeding up progress as a possible longtermist priority.)
I’d bet against that, but not with massive odds. (It’d be better for me to operationalise my claim more and put a number on it, rather than making these vague statements—I’m just taking the lazy option to save time.)
And then if that were true, it could make sense to focus, most of the time, just on evaluating things based on short-term effects, because those are easier to evaluate. Most people could focus on that proxy most of the time, while a smaller number of people continue checking whether it seems a good proxy and whether we can come up with better ones.
I think most longtermists are already doing something that’s not massively different from that: Most of us focus most of the time on reducing existential risk, or some specific type of existential risk (e.g., extinction caused by AI), as if that’s our ultimate, terminal goal. Or we might even most of the time focus on an even more “proximate” or “merely instrumental” proxy, like “improving institutions’ ability and motivation to respond effectively to [x]”, again as if that’s a terminal goal.
(I mean this to stand in contrast to consciously focusing on “improving the long-term future as much as possible”, and continually re-deriving what proxies to focus on based on that goal. That would just be less efficient.)
Then we sometimes check in on whether the proxies we focus on are actually what’s best for the future.
I think this approach makes sense, though it’s also good to remain aware of what’s a proxy and what’s an ultimate goal, and to recognise our uncertainty about how good our proxies are. (This post seems relevant, and in any case is quite good.)