I think this post makes important points, and makes them well. If I were to distill my own thoughts on this sort of topic into just two key points, it’d be:
People with and without suffering-focused ethics will agree on many aspects of how the long-term future should be. In particular, this is because many existential catastrophes will also be suffering catastrophes, and vice versa. (See also Venn diagrams of existential, global, and suffering catastrophes.)
E.g., “a permanent lock-in of a totalitarian power structure” sounds awful to pretty much everyone.
People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways which the other group values.
E.g., improving our values and political system seems like it could both (a) reduce extinction risks and (b) reduce the expected amount of suffering in futures that are overall good from a non-suffering-focused perspective.
(Also, btw, questions and links to resources relevant to many of the topics you mentioned can be found in my recent post Crucial questions for longtermists.)
Thanks for the comment! I fully agree with your points.
People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways which the other group values.
That’s a good point. A key question is how fine-grained our influence over the long-term future is—that is, to what extent are there actions that only benefit specific values? For instance, if we think there will not be a lock-in or transformative technology soon, it might be that the best lever over the long-term future is to try to nudge society in broadly positive directions, because trying to affect the long-term future in more specific ways is simply too “chaotic”. (However, overall I think it’s unclear whether, or to what extent, that is true.)