Hmm, I don't really like "short-termist" (or "near-termist"), since that only seems to cover what Ord calls the "present"-focused "moral foundation" for focusing on x-risks, rather than also the past, civilizational virtue, or cosmic significance perspectives.
Relatedly, "short-termist" seems to imply we're still assuming a broadly utilitarian-ish perspective but just not being longtermist, whereas I think it'd be good if these tags could cover more deontological and virtue-focused perspectives. (You could have deontological and virtue-focused perspectives that prioritise x-risk in a way that ultimately comes down to effects on the near term, but not all such perspectives would be like that.)
Some more ideas:
Existential risk prioritization for non-longtermists
Alternative perspectives on existential risk prioritization
I don't really like tag names that say "alternative" in a way that just assumes everyone will know what they're alternative to, but I'm throwing the idea out there anyway, and we do have some other tags with names like that
The reasons for caring about x-risk that Toby mentions are relevant from many moral perspectives, but I think we shouldn't cover them on the EA Wiki, which should be focused on reasons that are relevant from an EA perspective. Effective altruism is focused on finding the best ways to benefit others (understood as moral patients), and by "short-termist" I mean views that restrict the class of "others" to moral patients currently alive, or whose lives won't be in the distant future. So I think short-termist + long-termist arguments exhaust the arguments relevant from an EA perspective, and therefore think that all the arguments we should cover in an article about non-longtermist arguments are short-termist arguments.
It's not immediately obvious that the EA Wiki should focus solely on considerations relevant from an EA perspective. But after thinking about this for quite some time, I think that's the approach we should take, in part because providing a distillation of those considerations is one of the ways in which the EA Wiki could provide value relative to other reference works, especially on topics that already receive at least some attention in non-EA circles.
Hmm. I think I agree with the principle that "the EA Wiki should focus solely on considerations relevant from an EA perspective", but have a broader notion of what considerations are relevant from an EA perspective. (It also seems to me that the Wiki is already operating with a broader notion of that than you seem to be suggesting, given that e.g. we have an entry for deontology.)
I think the three core reasons I have this view are:
effective altruism is actually a big fuzzy bundle of a bunch of overlapping things
we should be morally uncertain
in order to do good from "an EA perspective", it's in practice often very useful to understand different perspectives other people hold and communicate with those people in terms of those perspectives
On 1 and 2:
I think "Effective altruism is focused on finding the best ways to benefit others (understood as moral patients)" is an overly strong statement.
Effective altruism could be understood as a community of people or as a set of ideas, and either way there are many different ways one could reasonably draw the boundaries.
One definition that seems good to me is this one from MacAskill (2019):
"Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. [...]
The definition is: [...] Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone's wellbeing counting equally." (emphasis added, and formatting tweaked)
I think we should be quite morally uncertain.
And many seemingly smart and well-informed people have given non-welfarist or even non-consequentialist perspectives a lot of weight (see e.g. the PhilPapers survey).
And I myself see some force in arguments or intuitions for non-welfarist or even non-consequentialist perspectives.
So I think we should see at least consideration of non-welfarist and non-consequentialist perspectives as something that could make sense as part of the project to "use evidence and reason to do the most good possible".
Empirically, I think the above views are shared by many other people in EA
Including two of the main founders of the movement
MacAskill wrote a thesis and book on moral uncertainty (though I don't know his precise stance on giving weight to non-consequentialist views)
Ord included discussion of the previously mentioned 5 perspectives in his book, and has indicated that he genuinely sees some force in the ones other than present and future
These views also seem in line with the "long reflection" idea that both of those people see as quite important
For long-reflection-related reasons, I'd actually be quite concerned about the idea that we should, at this stage of (in my view) massive ignorance, totally confidently commit to the ideas of consequentialism and welfarism
Though one could support the idea of the long reflection while being certain about consequentialism and welfarism
Also, Beckstead seemed open to non-consequentialism in a recent talk at the SERI conference
Relatedly, I think many effective altruists put nontrivial weight on the idea that they should abide by certain deontological constraints/duties, and not simply because that might be a good decision procedure for implementing utilitarianism in practice
Maybe the same is true in relation to virtue ethics, but I'm not sure
I think the same is at least somewhat true with regards to the "past"-focused moral foundation Ord mentions
I find that framing emotionally resonant, but I don't give it much weight
Jaan Tallinn seemed to indicate putting some weight on that framing in a recent FLI podcast episode (search 'ancestors')
On 3:
EA represents/has a tiny minority of all the people, money, political power, etc. in the world
The other people can block our actions, counter their effects, provide us support, become inspired to join us, etc.
How much of each of those things happens will have a huge influence on the amount of good we're ultimately able to do
One implication is that what other people are thinking and why is very decision-relevant for us
Just as many other actors in the world that don't adopt an EA mindset (e.g., the European Union) could still be decision-relevant enough to warrant an entry
One could see speciesism as a more extreme version of this; that's of course not in line with an impartial welfarist mindset, but impartial welfarists may be more effective if they know about speciesism
Another implication is that being able to talk to people in ways that connect to their own values, epistemologies, etc. (or show them resources that do this, e.g. parts of The Precipice) can be very valuable for advocacy purposes
I'll respond quickly because I'm pressed for time.
I don't think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
I don't understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don't see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term "welfarism", but I don't think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret "others" in "helping others effectively" as "whichever beings count morally". Relatedly, uncertainty about what counts as a person's wellbeing is also relevant, at least if we interpret "helping" in "helping others effectively" as "improving their wellbeing". So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don't see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons they expect to be most persuasive to their readers.
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.
---
I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the "core" of EA, as I see it, also doesn't include anti-ageing work, and maybe doesn't include a concern for suffering subroutines, but the Wiki covers those things and I think it's good that it does so.
(I do think a notable difference between those topics and the other moral perspectives is that one could arrive at those focus areas while having a focus on "helping others". But my basic point here is that the core of EA isn't the whole of EA and isn't all that the EA Wiki should cover.)
Going back to "the EA Wiki should focus solely on considerations relevant from an EA perspective", I think that's a good principle but that those considerations aren't limited to "the core of EA".
---
My understanding of EA, captured in the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view.
Was the word "not" meant to be in there? Or did you mean to say the opposite?
If the "not" is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance? If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things.
Maybe you're just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord's book? (Btw, I do now recognise that I made a mistake in my previous comment: I wrote as if "helping others" meant the focus must be welfarist and impartial, which is incorrect.)
---
I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, whatever that turns out to mean. And I think we aren't in a position to rule out perspectives that don't even focus on "helping others", including virtue-ethical perspectives or cosmic significance perspectives.
I don't think I'd want the cosmic significance thing to get its own wiki entry, but it seems fair for it to be something like 1 of 4 perspectives that a single entry covers, and in reality emphasised much less in that entry than 1 of the other 3 (the present-focused perspective), especially if that entry is applying these perspectives to a topic many EAs care about anyway.
---
Your point 3 sounds right to me. I think I should retract the "advocacy"-focused part of my previous comment.
But the "understanding these other actors" part still seems to me like a good reason to include entries on things like moral views that might be pretty foreign to EA (e.g., speciesism or the 3 not-really-helping-others perspectives Ord mentions).
---
Also, I just checked the 2019 EA survey, and apparently 70% of respondents identified with "consequentialism (utilitarian)", but 30% didn't, including some people identifying with virtue ethics or deontology. But I'm not sure how relevant that is, given that they might have flavours of virtue ethics or deontology that are still quite distinct from the related perspectives Ord mentions.
---
(Apologies if the amount I've written gave a vibe of me trying to batter you into giving up or something; it's more just that it'd take me longer to be concise.)
Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread: on reflection, I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we don't agree on the definition of EA, the relevance of moral uncertainty, or other issues, luckily that doesn't turn out to be an obstacle to agreeing on this particular issue.
Perhaps the article should simply be called "arguments for existential risk prioritization" and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I've gone ahead and made the tag, currently with the name Moral perspectives on existential risk reduction. I'm still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.