Thanks for the comment! I fully agree with your points.
> People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.
That’s a good point. A key question is how fine-grained our influence over the long-term future is, i.e., to what extent there are actions that benefit only specific values. For instance, if we think there will not be a lock-in or transformative technology soon, the best lever over the long-term future might be to nudge society in broadly positive directions, because more targeted attempts to affect the long-term future may simply be too “chaotic”. (However, overall I think it’s unclear whether, and to what extent, that is true.)