Hmm. I think I agree with the principle that "the EA Wiki should focus solely on considerations relevant from an EA perspective", but have a broader notion of what considerations are relevant from an EA perspective. (It also seems to me that the Wiki is already operating with a broader notion of that than you seem to be suggesting, given that e.g. we have an entry for deontology.)
I think the three core reasons I have this view are:
effective altruism is actually a big fuzzy bundle of a bunch of overlapping things
we should be morally uncertain
in order to do good from "an EA perspective", it's in practice often very useful to understand different perspectives other people hold and communicate with those people in terms of those perspectives
On 1 and 2:
I think "Effective altruism is focused on finding the best ways to benefit others (understood as moral patients)" is an overly strong statement.
Effective altruism could be understood as a community of people or as a set of ideas, and either way there are many different ways one could reasonably draw the boundaries.
One definition that seems good to me is this one from MacAskill (2019):
"Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. [...]
The definition is: [...] Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone's wellbeing counting equally." (emphasis added, and formatting tweaked)
I think we should be quite morally uncertain.
And many seemingly smart and well-informed people have given non-welfarist or even non-consequentialist perspectives a lot of weight (see e.g. the PhilPapers survey).
And I myself see some force in arguments or intuitions for non-welfarist or even non-consequentialist perspectives.
So I think we should see at least consideration of non-welfarist and non-consequentialist perspectives as something that could make sense as part of the project to "use evidence and reason to do the most good possible".
Empirically, I think the above views are shared by many other people in EA
Including two of the main founders of the movement
MacAskill wrote a thesis and book on moral uncertainty (though I don't know his precise stance on giving weight to non-consequentialist views)
Ord included discussion of the previously mentioned 5 perspectives in his book, and has indicated that he genuinely sees some force in the ones other than the present- and future-focused ones
These views also seem in line with the "long reflection" idea that both of those people see as quite important
For long-reflection-related reasons, I'd actually be quite concerned about the idea that we should, at this stage of (in my view) massive ignorance, totally confidently commit to the ideas of consequentialism and welfarism
Though one could support the idea of the long reflection while being certain about consequentialism and welfarism
Also, Beckstead seemed open to non-consequentialism in a recent talk at the SERI conference
Relatedly, I think many effective altruists put nontrivial weight on the idea that they should abide by certain deontological constraints/duties, and not simply because that might be a good decision procedure for implementing utilitarianism in practice
Maybe the same is true in relation to virtue ethics, but I'm not sure
I think the same is at least somewhat true with regards to the "past"-focused moral foundation Ord mentions
I find that framing emotionally resonant, but I don't give it much weight
Jaan Tallinn seemed to indicate putting some weight on that framing in a recent FLI podcast episode (search "ancestors")
On 3:
EA represents/has a tiny minority of all the people, money, political power, etc. in the world
The other people can block our actions, counter their effects, provide us support, become inspired to join us, etc.
How much of each of those things happens will have a huge influence on the amount of good we're ultimately able to do
One implication is that what other people are thinking and why is very decision-relevant for us
Just as many other features of the world that don't adopt an EA mindset (e.g., the European Union) could still be decision-relevant enough to warrant an entry
One could see speciesism as a more extreme version of this; that's of course not in line with an impartial welfarist mindset, but impartial welfarists may be more effective if they know about speciesism
Another implication is that being able to talk to people in ways that connect to their own values, epistemologies, etc. (or show them resources that do this, e.g. parts of the Precipice) can be very valuable for advocacy purposes
I'll respond quickly because I'm pressed for time.
I don't think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
I don't understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don't see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term "welfarism", but I don't think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret "others" in "helping others effectively" as "whichever beings count morally". Relatedly, uncertainty about what counts as a person's wellbeing is also relevant, at least if we interpret "helping" in "helping others effectively" as "improving their wellbeing". So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don't see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons they expect to be most persuasive to their readers.
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.
---
I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the "core" of EA, as I see it, also doesn't include anti-ageing work, and maybe doesn't include a concern for suffering subroutines, but the Wiki covers those things and I think that it's good that it does so.
(I do think a notable difference between those topics and the other moral perspectives is that one could arrive at those focus areas while having a focus on "helping others". But my basic point here is that the core of EA isn't the whole of EA and isn't all that the EA Wiki should cover.)
Going back to "the EA Wiki should focus solely on considerations relevant from an EA perspective", I think that that's a good principle but that those considerations aren't limited to "the core of EA".
---
My understanding of EA, captured in the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view.
Was the word "not" meant to be in there? Or did you mean to say the opposite?
If the "not" is intended, then this seems to clash with your saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance. If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things.
Maybe you're just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord's book? (Btw, I do now recognise that I made a mistake in my previous comment: I wrote as if "helping others" meant the focus must be welfarist and impartial, which is incorrect.)
---
I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, whatever that turns out to mean. And I think we aren't in a position to rule out perspectives that don't even focus on "helping others", including virtue-ethical perspectives or cosmic significance perspectives.
I don't think I'd want the cosmic significance thing to get its own wiki entry, but it seems fair for it to be something like 1 of 4 perspectives that a single entry covers, and in reality emphasised much less in that entry than one of the others (the present-focused perspective), especially if that entry is applying these perspectives to a topic many EAs care about anyway.
---
Your point 3 sounds right to me. I think I should retract the "advocacy"-focused part of my previous comment.
But the "understanding these other actors" part still seems to me like a good reason to include entries on moral views that might be pretty foreign to EA (e.g., speciesism or the 3 not-really-helping-others perspectives Ord mentions).
---
Also, I just checked the 2019 EA survey, and apparently 70% of respondents identified with "consequentialism (utilitarian)", but 30% didn't, including some people identifying with virtue ethics or deontology. But I'm not sure how relevant that is, given that they might have flavours of virtue ethics or deontology that are still quite distinct from the related perspectives Ord mentions.
---
(Apologies if the amount I've written gave a vibe of me trying to batter you into giving up or something; it's more just that it'd take me longer to be concise.)
(Typing from my phone; apologies for any typos.)
Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread, on reflection I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we don't agree on the definition of EA, the relevance of moral uncertainty, or other issues, luckily that doesn't turn out to be an obstacle to agreeing on this particular issue.
Perhaps the article should simply be called "arguments for existential risk prioritization" and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I've gone ahead and made the tag, currently with the name "Moral perspectives on existential risk reduction". I'm still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.