For what it’s worth, Christian Tarsney from GPI has looked at other aggregative views:
Average Utilitarianism Implies Solipsistic Egoism. Summary: average utilitarianism and rank-discounted utilitarianism reduce to egoism due to the possibility of solipsism. Might also apply to variable value theories, depending on the factors. See also the earlier The average utilitarian’s solipsism wager by Caspar Oesterheld.
Non-additive axiologies in large worlds. Summary: With large background (e.g. unaffected) populations, average utilitarianism and some kinds of egalitarian and prioritarian theories reduce to additive theories, i.e. basically utilitarianism. Geometric rank-discounted utilitarianism reduces to maximin instead. (That being said, this doesn’t imply we should maximize expected total utility, since it doesn’t rule out risk-aversion.)
So, if your population axiology is representable by a single (continuous and impartial) real-valued function of utilities for finite populations (so excluding some person-affecting views), it seems hard to avoid totalism.
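(To give a rough sense of why that first reduction happens, here's a back-of-the-envelope sketch for the average utilitarian case; the notation and simplifications are mine, not the paper's. Suppose there's a fixed background population of size $N$ with average welfare $\bar{u}$, and you're choosing a foreground population with welfares $u_1, \dots, u_n$, where $n \ll N$. The overall average welfare is then

$$\frac{N\bar{u} + \sum_i u_i}{N + n} = \bar{u} + \frac{\sum_i u_i - n\bar{u}}{N + n} \approx \bar{u} + \frac{1}{N}\sum_i \left(u_i - \bar{u}\right),$$

so for large $N$, ranking options by average welfare approximately coincides with ranking them by the additive quantity $\sum_i (u_i - \bar{u})$, i.e. something like critical-level utilitarianism with the critical level set at the background average.)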
Also, I think such views (or utilitarianism) with deontological constraints added are already covered by existing interventions: you can just pick among the recommended ones that don’t violate any constraints, and I expect most don’t.
Suffering-focused ethics was also already mentioned.
Still, these are only slight variations of total utilitarianism or even special cases.
Some other works and authors exploring other views and their relationship to EA or EA concepts:
Teruji Thomas, ‘The Asymmetry, Uncertainty, and the Long Term’ (EA Forum post)
Phil Torres (overview of focus, publications, popular media writing, EA Forum account), who works on x-risks but, I think, believes in virtue ethics, and who is critical of total utilitarianism, longtermism, and EA’s neglect of social justice.
Roger Crisp and Theron Pummer, ‘Effective Justice’, discussing “Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources”
Open Phil works on causes that don’t receive that much attention within the rest of EA.
Johann Frick, ‘On the Survival of Humanity’ (pdf), discussing the “final value of humanity”, separate from the (aggregate) value of individuals.
Hilary Greaves, William MacAskill, ‘The case for strong longtermism’ (discusses risk-aversion in 4.2)
GPI’s other research on decision theory and cluelessness (deep uncertainty, Knightian uncertainty), offering and analyzing alternatives and adjustments to Bayesian expected value maximization, which is usually assumed in EA. I think they’re aiming for a more epistemically justified approach, and based on this paper and this paper, it seems like there aren’t any very satisfactory approaches.
Some less formal writing:
John Halstead, ‘The asymmetry and the far future’
Gregory Lewis, ‘The person-affecting value of existential risk reduction’
Alex HT, ‘If you value future people, why do you consider near term effects?’, and the discussion there
And there are of course critiques of EA, especially from leftists, from animal rights advocates (for our welfarism), and from those who think EA neglects large-scale systemic change.
On how risk- and uncertainty-aversion should arguably affect EA decisions, there was also this talk hosted by GPI, given by Lara Buchak.
(I’m mentioning that because it seems relevant, not necessarily because I agreed with the talk or with the basic idea that we should take intrinsic risk- or uncertainty-aversion seriously.)
Thanks for this list! I appreciate the Effective Justice paper because it: (1) articulates a deontological version of effective altruism and (2) shows how one could integrate the ideas of EA and justice. I’ve been trying to do the second thing for a while, although as a pure consequentialist I focus more on distributive justice, so this paper is inspiring for me.
Tangent:
“this doesn’t imply we should maximize expected total utility, since it doesn’t rule out risk-aversion”
What do you mean by this? Isn’t risk aversion just a fact about the utility function? You can maximize expected utility no matter how the utility function is shaped.
Ah, we use utility in two ways: the social welfare function whose expected value you maximize, and the welfares of individuals on which your social welfare function depends. You can be a risk-averse utilitarian, for example, with a social welfare function like $f\left(\sum_i u_i\right)$, where the $u_i$ are the individual utilities/welfares and $f : \mathbb{R} \to \mathbb{R}$ is nondecreasing and concave.
Hm, I’ve never seen the use of $f$ like that. Can you point to an example?
An example function $f$, or an example where someone actually recommended or used a particular function $f$?
I don’t know of any of the latter, but using an increasing and bounded $f$ has come up in some discussions about infinite ethics (although it couldn’t be concave towards $-\infty$). I discuss bounded utility functions here.
An example function is $1 - e^{-x}$. See this link for a graph. It’s strictly increasing and strictly concave everywhere, and bounded above, but not below.
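To make the risk-aversion point concrete (a toy example of my own, just using that $f$): compare a sure total welfare of $1$ with a 50/50 gamble between total welfare $0$ and total welfare $2$. Both have expected total welfare $1$, but with $f(x) = 1 - e^{-x}$,

$$f(1) = 1 - e^{-1} \approx 0.632 \;>\; \tfrac{1}{2}f(0) + \tfrac{1}{2}f(2) = \tfrac{1}{2}\left(1 - e^{-2}\right) \approx 0.432,$$

so someone maximizing the expected value of $f\left(\sum_i u_i\right)$ prefers the sure thing, even though an expected-total-utility maximizer would be indifferent. That’s the sense in which this counts as risk-averse utilitarianism.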
Yes, I meant an example of someone using f in this way. It doesn’t seem to be standard in welfare economics.