I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.
---
> I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the "core" of EA, as I see it, also doesn't include anti-ageing work, and maybe doesn't include a concern for suffering subroutines, but the Wiki covers those things and I think that it's good that it does so.
(I do think a notable difference between those topics and the other moral perspectives is that one could arrive at those focus areas while having a focus on "helping others". But my basic point here is that the core of EA isn't the whole of EA, and isn't all that the EA Wiki should cover.)
Going back to "the EA Wiki should focus solely on considerations relevant from an EA perspective", I think that's a good principle, but those considerations aren't limited to "the core of EA".
---
> My understanding of EA, captured in the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view.
Was the word "not" meant to be in there? Or did you mean to say the opposite?
If the "not" is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance. If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things.
Maybe you're just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those three things from Ord's book? (Btw, I do now recognise that I made a mistake in my previous comment: I wrote as if "helping others" meant the focus must be welfarist and impartial, which is incorrect.)
---
I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, whatever that turns out to mean. And I think we aren't in a position to rule out perspectives that don't even focus on "helping others", including virtue-ethical perspectives or cosmic significance perspectives.
I don't think I'd want the cosmic significance thing to get its own wiki entry, but it seems fair for it to be something like one of four perspectives that a single entry covers, and in practice emphasised much less in that entry than one of the other three (the present-focused perspective), especially if that entry is applying these perspectives to a topic many EAs care about anyway.
---
Your point 3 sounds right to me. I think I should retract the "advocacy"-focused part of my previous comment.
But the "understanding these other actors" part still seems to me like a good reason to include entries on moral views that might be pretty foreign to EA (e.g., speciesism, or the three not-really-about-helping-others perspectives Ord mentions).
---
Also, I just checked the 2019 EA survey, and apparently 70% of respondents identified with "consequentialism (utilitarian)", but 30% didn't, including some people identifying with virtue ethics or deontology. But I'm not sure how relevant that is, given that they might have flavours of virtue ethics or deontology that are still quite distinct from the related perspectives Ord mentions.
---
(Apologies if the amount I've written gave a vibe of me trying to batter you into giving up or something; it's more just that it'd take me longer to be concise.)
(Typing from my phone; apologies for any typos.)
Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread, on reflection I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we may not agree on the definition of EA, on the relevance of moral uncertainty, or on other issues, luckily that doesn't turn out to be an obstacle to agreeing on this particular issue.
Perhaps the article should simply be called "Arguments for existential risk prioritization" and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I've gone ahead and made the tag, currently with the name "Moral perspectives on existential risk reduction". I'm still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.