Newberry and Ord’s paper on moral parliamentarianism (an approach originally proposed by Bostrom) seems like a reasonable way to arrive there. (Which seems almost ironic, given that they are key proponents of strong longtermism.)
I don’t think I’m a proponent of strong longtermism at all — at least not on the definition given in the earlier draft of Will and Hilary’s paper on the topic, which got a lot of attention here a while back and is what most people will associate with the name. I am happy to call myself a longtermist, though that term also doesn’t have an agreed definition at the moment.
Here is how I put it in The Precipice:
Considerations like these suggest an ethic we might call longtermism, which is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity’s potential is one avenue for such a lasting impact and there may be others too.
My preferred use of the term is akin to being an environmentalist: it doesn’t mean that the only thing that matters is the environment, just that it is a core part of what you care about and informs a lot of your thinking.
I’m also not defending or promoting strong longtermism in my next book. I defend (non-strong) longtermism, and the definition I use is: “longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time.” I agree with Toby on the analogy to environmentalism.
(The definition I use of strong longtermism is that it’s the view that positively influencing the longterm future is the moral priority of our time.)
Thanks Will—I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify.
I’m especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the “mainstream EA view” — albeit one with which almost everyone I have spoken to about the issue in detail seems to disagree.
It would indeed be ironic—the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment.
I don’t think it’s necessarily clear that incorporating moral uncertainty means you have to support hedging across different plausible views. If one maximises expected choiceworthiness (MEC), for example, one can be fanatically driven by a single view that posits an extreme payoff (e.g. strong longtermism!).
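The fanaticism worry about MEC can be made concrete with a toy calculation. In the sketch below, all the credences, action names, and choiceworthiness numbers are invented purely for illustration (they come from no paper discussed here): a view held with only 10% credence still determines the verdict, because the payoff it assigns is extreme.

```python
# Toy illustration: maximising expected choiceworthiness (MEC) can be
# dominated by a single theory with an extreme payoff, even at low credence.
# All numbers and names below are hypothetical, chosen for illustration only.

def expected_choiceworthiness(credences, payoffs):
    """CW(action) = sum over theories of credence(theory) * CW_theory(action)."""
    actions = next(iter(payoffs.values())).keys()
    return {
        action: sum(credences[t] * payoffs[t][action] for t in credences)
        for action in actions
    }

# Credence assigned to each moral theory (hypothetical).
credences = {"strong_longtermism": 0.1, "common_sense": 0.9}

# Choiceworthiness each theory assigns to each action (hypothetical).
payoffs = {
    "strong_longtermism": {"fund_xrisk": 1e6, "fund_bednets": 1.0},
    "common_sense":       {"fund_xrisk": 0.0, "fund_bednets": 10.0},
}

cw = expected_choiceworthiness(credences, payoffs)
# cw["fund_xrisk"]   = 0.1 * 1e6 + 0.9 * 0  = 100000.0
# cw["fund_bednets"] = 0.1 * 1   + 0.9 * 10 = 9.1
best = max(cw, key=cw.get)  # -> "fund_xrisk"
```

Even though the strong-longtermist theory gets only 10% credence, its astronomical payoff swamps the aggregate, which is exactly the fanaticism concern: MEC hedges in principle, but in practice a single extreme view can dictate the choice.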
Indeed, MacAskill and Greaves have argued that strong longtermism seems robust to variations in population axiology and decision theory, whilst Ord has argued that reducing x-risk is robust to normative variations (deontology, virtue ethics, consequentialism). If an action is robust to axiological variations, this can also help it dominate other actions, even under moral uncertainty.
Reading the abstract of the moral parliamentarianism paper, it isn’t clear to me that Ord is actually a proponent of that approach — only that he has a view on the best specific approach within moral parliamentarianism.
As I say in my comment to Ben, I think an MEC approach to moral uncertainty can lead to being quite fanatical in favour of longtermism.
That’s super helpful to see clarified, and I will edit the post to reflect that—thanks!
I think Ord’s favoured approach to moral uncertainty is maximising expected choiceworthiness (MEC), which he argues for with Will MacAskill.