For what it's worth, Christian Tarsney from GPI has looked at other aggregative views:
Average Utilitarianism Implies Solipsistic Egoism. Summary: average utilitarianism and rank-discounted utilitarianism reduce to egoism due to the possibility of solipsism. Might also apply to variable value theories, depending on the factors. See also the earlier The average utilitarian's solipsism wager by Caspar Oesterheld.
Non-additive axiologies in large worlds. Summary: With large background (e.g. unaffected) populations, average utilitarianism and some kinds of egalitarian and prioritarian theories reduce to additive theories, i.e. basically utilitarianism. Geometric rank-discounted utilitarianism reduces to maximin instead. (That being said, this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion.)
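To give a flavour of the average-utilitarian case (my own back-of-the-envelope sketch, not the paper's proof): suppose a background population of $N$ unaffected people with total welfare $S$, so background average $\bar u = S/N$, and suppose an option adds $k$ people with total welfare $T$. The resulting average satisfies

$$\frac{S+T}{N+k} \;=\; \bar u \;+\; \frac{T - k\,\bar u}{N+k},$$

so for a fixed background and $N \gg k$, the denominator $N+k$ is roughly constant across options, and options are ranked by $T - k\,\bar u$: the total welfare of the added people minus a per-person "critical level" equal to the background average. That is an additive (critical-level totalist) criterion.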
So, if your population axiology is representable by a single (continuous and impartial) real-valued function of utilities for finite populations (so excluding some person-affecting views), it seems hard to avoid totalism.
Also, I think such views (or utilitarianism) with deontological constraints added are covered by existing interventions; you can just pick among the recommended ones that don't violate any constraints, and I expect that most don't.
Suffering-focused ethics was also already mentioned.
Still, these are only slight variations of total utilitarianism or even special cases.
Some other works and authors exploring other views and their relationship to EA or EA concepts:
Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (EA Forum post)
Phil Torres (overview of focus, publications, popular media writing, EA Forum account), who works on x-risks, but I think believes in virtue ethics, and is critical of total utilitarianism, longtermism, and EA's neglect of social justice.
Roger Crisp and Theron Pummer, "Effective Justice", discussing "Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources"
Open Phil works on causes that don't receive that much attention within the rest of EA.
Johann Frick, "On the Survival of Humanity" (pdf), discussing the "final value of humanity", separate from the (aggregate) value of individuals.
Hilary Greaves, William MacAskill, "The case for strong longtermism" (discusses risk-aversion in 4.2)
GPI's other research on decision theory and cluelessness (deep uncertainty, Knightian uncertainty), offering and analyzing alternatives and adjustments to Bayesian expected value maximization, which is usually assumed in EA. I think they're aiming for a more epistemically justified approach, and based on this paper and this paper, it seems like there aren't any very satisfactory approaches.
Some less formal writing:
John Halstead, "The asymmetry and the far future"
Gregory Lewis, "The person-affecting value of existential risk reduction"
Alex HT, "If you value future people, why do you consider near term effects?", and the discussion there
And there are, of course, critiques of EA, especially by leftists, by animal rights advocates (for our welfarism), and for neglecting large-scale systemic change.
On how risk- and uncertainty-aversion should arguably affect EA decisions, there was also this talk hosted by GPI, by Lara Buchak.
(I'm mentioning that because it seems relevant, not necessarily because I agreed with the talk or with the basic idea that we should take intrinsic risk- or uncertainty-aversion seriously.)
Thanks for this list! I appreciate the Effective Justice paper because it: (1) articulates a deontological version of effective altruism and (2) shows how one could integrate the ideas of EA and justice. I've been trying to do the second thing for a while, although as a pure consequentialist I focus more on distributive justice, so this paper is inspiring for me.
Tangent:
"this doesn't imply we should maximize expected total utility, since it doesn't rule out risk-aversion"
What do you mean by this? Isn't risk aversion just a fact about the utility function? You can maximize expected utility no matter how the utility function is shaped.
Ah, we use utility in two ways: the social welfare function whose expected value you maximize, and the welfares of individuals on which your social welfare function depends. You can be a risk-averse utilitarian, for example, with a social welfare function like $f\left(\sum_i u_i\right)$, where the $u_i$ are the individual utilities/welfares and $f: \mathbb{R} \to \mathbb{R}$ is nondecreasing and concave.
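A minimal sketch of the two levels, assuming the concrete $f$ and the welfare numbers are just my own illustration (here $f(x) = 1 - e^{-x}$, the example that comes up later in this thread):

```python
import math

def f(total_welfare: float) -> float:
    # A nondecreasing, concave transform of total welfare.
    # Here f(x) = 1 - e^(-x): increasing, concave, bounded above by 1.
    return 1.0 - math.exp(-total_welfare)

def expected_social_welfare(lottery):
    # E[f(sum_i u_i)] for a discrete lottery, given as a list of
    # (individual_welfares, probability) pairs.
    return sum(p * f(sum(welfares)) for welfares, p in lottery)

# Two lotteries with the same expected *total* welfare (2.0):
sure  = [([1.0, 1.0], 1.0)]                     # total 2 for certain
risky = [([0.0, 0.0], 0.5), ([2.0, 2.0], 0.5)]  # total 0 or 4, 50/50

print(expected_social_welfare(sure))   # f(2) = 0.865...
print(expected_social_welfare(risky))  # 0.5*f(0) + 0.5*f(4) = 0.490...
```

Because $f$ is concave, the expected-$f$ maximizer prefers the sure total even though both lotteries have the same expected total welfare; a plain total utilitarian ($f$ the identity) would be indifferent between them.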
Hm, I've never seen the use of $f$ like that. Can you point to an example?
An example function $f$, or an example where someone actually recommended or used a particular function $f$?
I don't know of any of the latter, but using an increasing and bounded $f$ has come up in some discussions about infinite ethics (although it couldn't be concave toward $-\infty$). I discuss bounded utility functions here.
An example function is $1 - e^{-x}$. See this link for a graph. It's strictly increasing and strictly concave everywhere, and bounded above, but not below.
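Spelling out why it has those properties:

$$f(x) = 1 - e^{-x}, \qquad f'(x) = e^{-x} > 0, \qquad f''(x) = -e^{-x} < 0,$$

so $f$ is strictly increasing and strictly concave on all of $\mathbb{R}$, with $f(x) \to 1$ as $x \to \infty$ (bounded above) and $f(x) \to -\infty$ as $x \to -\infty$ (unbounded below).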
Yes, I meant an example of someone using $f$ in this way. It doesn't seem to be standard in welfare economics.