I’ll respond quickly because I’m pressed for time.
I don’t think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
I don’t understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don’t see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term ‘welfarism’, but I don’t think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret ‘others’ in “helping others effectively” as “whichever beings count morally”. Relatedly, uncertainty about what counts as a person’s wellbeing is also relevant, at least if we interpret ‘helping’ in “helping others effectively” as “improving their wellbeing”. So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
I agree it often makes sense to frame our concerns in terms of reasons that resonate with our target audience, but I don’t see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons they expect to be most persuasive to their readers.
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.
---
> I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. The “core” of EA, as I see it, also doesn’t include anti-ageing work, and maybe doesn’t include a concern for suffering subroutines, but the Wiki covers those things and I think it’s good that it does.
(I do think a notable difference between those examples and the other moral perspectives is that one could arrive at those focus areas while still having a focus on “helping others”. But my basic point here is that the core of EA isn’t the whole of EA and isn’t all that the EA Wiki should cover.)
Going back to “the EA Wiki should focus solely on considerations relevant from an EA perspective”, I think that that’s a good principle but that those considerations aren’t limited to “the core of EA”.
---
> My understanding of EA, captured in the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view.
Was the word “not” meant to be in there? Or did you mean to say the opposite?
If the “not” is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance. If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes them.
Maybe you’re just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord’s book? (Btw, I do now recognise that I made a mistake in my previous comment—I wrote as if “helping others” meant the focus must be welfarist and impartial, which is incorrect.)
---
I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, whatever that turns out to mean. And I think we aren’t in a position to rule out perspectives that don’t even focus on “helping others”, including virtue-ethical perspectives or cosmic significance perspectives.
I don’t think I’d want the cosmic significance thing to get its own wiki entry, but it seems fair for it to be something like 1 of 4 perspectives that a single entry covers, and in reality emphasised much less in that entry than 1 of the other 3 (the present-focused perspective), especially if that entry is applying these perspectives to a topic many EAs care about anyway.
---
Your point 3 sounds right to me. I think I should retract the “advocacy”-focused part of my previous comment.
But the “understanding these other actors” part still seems to me like a good reason to include entries on moral views that might be pretty foreign to EA (e.g., speciesism or the 3 not-really-helping-others perspectives Ord mentions).
---
Also, I just checked the 2019 EA survey, and apparently 70% of respondents identified with “consequentialism (utilitarian)”, but 30% didn’t, including some people identifying with virtue ethics or deontology. But I’m not sure how relevant that is, given that they might have flavours of virtue ethics or deontology that are still quite distinct from the related perspectives Ord mentions.
---
(Apologies if the amount I’ve written gave a vibe of me trying to batter you into giving up or something—it’s more just that it’d take me longer to be concise.)
(Typing from my phone; apologies for any typos.)
Thanks for the reply. There are a bunch of interesting questions I’d like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread: on reflection, I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we don’t agree on the definition of EA, on the relevance of moral uncertainty, or on other issues, luckily that doesn’t turn out to be an obstacle to agreeing on this particular issue.
Perhaps the article should simply be called “Arguments for existential risk prioritization” and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don’t have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send you if you are interested. I probably won’t be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I’ve gone ahead and made the tag, currently with the name “Moral perspectives on existential risk reduction”. I’m still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.