That Effective Altruists, implicitly if not explicitly, nearly always assume a single moral framework: some version of utilitarianism. It is only one of very many plausible registers of human value, and one whose prominence in the Anglophone academy has long waned post-Rawls (never mind on the continent). I find the fact that this is a silent unanimity, tacit and never raised to the level of explicit discussion, doubly problematic.
I say this as someone who completely rejects utilitarianism, but recognises the obvious and ecumenical value in gauging high-utility giving opportunities and donating accordingly, i.e. as an analytic proxy for interpersonal comparisons, which can guide my (non-utilitarian) want to maximally remedy unnecessary human indigence.
A more accurate characterization, I think, is to say that many or most EAs are consequentialists; utilitarianism is a more specific position that only a subset of consequentialists (and EAs) endorse.
Note that about one quarter of respondents in the recent PhilPapers survey accept or lean towards consequentialism; the remaining three quarters are roughly equally divided between those who accept or lean towards either deontology or virtue ethics, and those who endorse some other moral position. So I think you are somewhat exaggerating the tension between the moral views of EAs and those of professional philosophers.
Finally, consequentialism has a feature that makes it unique among rival plausible moral views, namely, that all such views agree that good outcomes are at least part of what matters morally. Consequentialists take the further step of claiming that good outcomes are the only thing that matters. (By contrast, there is no component in deontology or virtue ethics that is shared by all other rival views, other than the consequentialist component.) It follows from this feature that research on what consequentialism implies has, in principle, relevance for all other theories, since such theories could be understood as issuing requirements that coincide with those of consequentialism except when they come into conflict with other requirements that they may issue (e.g., for some forms of deontology, you should maximize good unless this violates people’s rights, so when no rights are violated these theories imply that you should act as a consequentialist).
In modern discourse, varieties of consequentialism and utilitarianism bear a family resemblance sufficient to warrant using the terms interchangeably, in my opinion. If you think otherwise, and mark the relevant function of their distinction, I will observe it.
As for the substance of your point:
(i) in terms of its marginality, 23% (third out of four, one long dead) is appreciable but hardly impressive, given its absence from the neighbouring, larger field of political philosophy, to which I alluded (and which, in the poll you cite, doesn’t include utilitarianism as an option). Moreover, if you look at the most-cited (top 15) normative books in post-war Anglophone philosophy, utilitarianism is absent: Rawls’ A Theory of Justice (26,768), Dworkin’s Taking Rights Seriously (7,892), MacIntyre’s After Virtue (6,579), Rawls’ Political Liberalism (6,352), Nozick’s Anarchy, State, and Utopia (6,246). The first possibly utilitarian work is all the way down at 30: Parfit’s Reasons and Persons, with just 2,972 citations (a book no one would ever call utilitarian, and which is only very partially concerned with ethics at that). That is to say, liberal egalitarianism (Rawls, seconded by Dworkin) is completely dominant, with Aristotelianism (MacIntyre) and libertarianism (Nozick) trailing. Of course, most citations of MacIntyre probably affirm his positive argument that the Enlightenment project failed, while rejecting his substitute reversion to Aristotelianism. In that sense, it might even be a two-horse race (although, again, it’s not really a race: liberal egalitarianism boasts over 40,000 citations between the three works above, libertarianism just 6,000). I should also add that the other leading works are not favourable to the whole enterprise of ethics: Wittgenstein, Rorty, Kuhn and so forth. If you allow the continent, I imagine Foucault and Sartre shoot to the top and to just below Rawls respectively, at the very least (Beauvoir’s The Second Sex probably ranks as well).
Reference: http://leiterreports.typepad.com/blog/2009/11/the-most-cited-books-in-postwwii-anglophone-philosophy.html
(ii) I agree that researching the optimum means of bringing about one’s preferred unit of consequence can integrate well with a wider plurality of values; my issue, however, is internal to the movement, as I have discussed above with some elaboration.
Your original claim concerned moral philosophy, but the evidence you provide in your latest comment predominantly concerns political philosophy. Consequentialism (a moral view) is compatible with liberalism (a political view), so evidence for the popularity of liberalism is not itself evidence for the unpopularity of consequentialism.
Furthermore, a representative poll where professional philosophers can state their preferred moral views directly seems to be a better measure of the relative popularity of those views in the philosophy profession than citation counts of books published over a given time period. The latter may be relied upon as an imperfect proxy for the former in the absence of poll data, but their evidential relevance diminishes considerably once such data becomes available.
Note, too, that using your criterion we should conclude that falsificationism—advocated in Conjectures and Refutations and Scientific Knowledge—is the dominant position in philosophy of science, when it is in fact moribund. Similarly, that ranking would misleadingly suggest that eliminative materialism—advocated in Consciousness Explained—is the dominant view in philosophy of mind, when this isn’t at all the case. In fact, many if not most of the books cited in that ranking represent positions that have largely fallen out of favor in contemporary analytic philosophy; this is at least the case with Kuhn, MacIntyre, Ryle, Rorty, Searle and maybe Fodor, besides Popper and Dennett. In addition, owing to discrepancies in the number of philosophers who work in different philosophical areas and the popularity of some of these areas in disciplines outside philosophy, the ranking grossly overrepresents some areas (political philosophy, philosophy of science, philosophy of mind) and underrepresents others of at least comparable importance (metaphysics, epistemology, normative ethics), strongly suggesting that it is particularly ill-suited for comparisons spanning multiple areas (such as one involving both normative ethics and political philosophy) and further strengthening the case for relying on poll data over citation counts.
Let me, however, highlight that I agree with you that the high prevalence of consequentialists in the EA movement is a striking fact that raises various concerns and certainly deserves further thought and study.
I’m another effective altruist who is explicitly not a utilitarian. Having followed discussion on, e.g., the ‘Effective Altruists’ Facebook group for a long time, I’ve seen discussions over whether effective altruism is tantamount to utilitarianism arise, and non-utilitarians come out of the woodwork. It hasn’t happened very often, though.
I like the points raised in the link Josh You provided below.
If virtue ethics and deontology both contain a quintessential consequentialist element, then maybe they can just be considered convoluted variants of consequentialism with other fundamental principles installed alongside it. If so, perhaps effective altruism can be considered another set of (frameworks for) ethics involving consequentialism while also valuing other things. Building a framework like that might be necessary for humans, because we weren’t built to be very effective utilitarians. I believe effective altruism was started by lots of different types of utilitarians, but also by non-philosophers who were affected by the principles and heuristics effective altruists have been designing.
William MacAskill makes a few good points here about why EA does not rely on utilitarianism. It’s true that a lot of EAs are utilitarians, but I’ve seen plenty of discussions of normative ethics in EA circles, so I wouldn’t describe it as a silent unanimity.
I would, as I already have, readily admit that EA is of ecumenical moral interest. Its practitioners, however, are overwhelmingly of a singular stripe. I have certainly never heard it discussed, having followed and somewhat intermingled with the community for some time.
I think it’s fair to say that effective altruists don’t discuss “the fact that they’re predominantly utilitarian” very much, and that might seem kind of sinister on the surface, but I’m not quite sure how they’re supposed to discuss this topic. They could do a mea culpa and apologise for their lack of philosophical diversity, but this seems inappropriate. Alternatively, they could analyse utilitarianism in detail, which also seems wrong. What they have done is make a few public statements that, in principle, EA is more inclusive than that, which seems like a good first step. Is there much more that urgently needs to be done?
[I rearranged this to put the last paragraph first, because it gives the most concise and direct attention to my point of concern]
Let me put the question of tactics, with simplification, this way: insofar as you admit that optimising units of consequences is a subset of the panoply of moral obligations one faces, two things appear true. (i) externally, an organisation claiming merely to evaluate the best means of increasing valuable units of consequence per donation appears unproblematic; it facilitates your meeting of part of your moral obligations; (ii) an organisation internally operating, across its management, personnel and dissemination, with the sole goal of maximising valuable units of consequence per available resource, excludes the full range of the human values you recognise. Note that something similar holds for the internal composition of the movement. That is to say, while the movement might outwardly facilitate value pluralism, internally in organisation and composition, it abides by an almost singular logic. That can be extremely alienating for someone who doesn’t share that world-view, like myself.
I don’t encounter it as sinister in the slightest. I feel respondents are running away with the possible allusions or intended implications of my post. The EA community is seething with a very particular and on the whole homogeneous identity, a caricature of which might be drawn thus: a rigorous concern with instrumental rationality, with conforming available techniques and resources to given ends; an associated, marked favouring of analytically tractable means/ends; and an unsophisticated, intuitionistic or simply assumed utilitarianism, augmented by a complementary naturalistic world-view.
There is a whole lot to value there, exemplified well enough in the movement’s results. I do find two things alienating, however: the rationalization of the whole human experience, such that one is merely a teleological vessel for the satisfaction of the obvious and absolute good of benefits over costs (which, for at least the few ‘professional’ members of the movement I have encountered, sits squarely alongside neoclassical economic orthodoxy); and the failure to ever talk about or admit human values other than the preferred unit of consequence.
I should stress immediately, contrary to the sentiment of your (generous) reply, that these are largely experiences of individuals. I occasionally find that it contaminates analysis itself: such as in inter-generational comparisons (e.g. FHI’s straight-faced contemplation of the value of totalitarianism in guarding against x-risk), or tactical questions of how best to disseminate EA (again, in caricature: ‘say and do whatever most favourably brings about the desired reaction’). But for the most part, it does not make donor-relevant analysis problematic for me.
I want to say two things, then: (i) that I find something problematic in an absolutising rationalization without great reflection, in being highly adept in means without giving pause to properly consider ends; and (ii) that I find something problematic in the dominance this tendency has internal to the movement. (i) is a question of personal world-view adequacy; (ii) is one of organisational adequacy. Obviously I don’t expect those affirming (i) or its cognates to agree, but I do think (ii) has significance regardless of whether one observes or rejects it. Namely, for the idea, suggested in this thread, that the movement can present itself as only attempting to satisfy an important subset of possible moral values while being internally monological. You might readily accept this, but it is consequential at least for the limits of the movement’s membership.
*To repeat for a third time, in the hope of avoiding misunderstanding and too heavy a flurry of down-votes: I readily admit that the study of maximising favoured consequences is of ecumenical interest, and is sufficient in itself to warrant its organisational study.
I partially agree here. The parts that I find easiest to agree with relate to the exclusion of non-utilitarians. I think it’s important that people who are not utilitarian can enter effective altruist circles and participate in discussions. I think it also might be good for effective altruists to pull back from their utilitarian frame of analysis and take a more global view of how their proposals (e.g. totalitarianism as a reducer of x-risk) might be perceived from a broader value system, if for no reason other than ensuring their research remains of wider societal interest. FHI would argue that they already do a lot of this; for example, in his thesis, Nick Beckstead argued that the importance of the far future goes through on a variety of moral theories, not just classical utilitarianism. But they have some room to improve.
I find it harder to sympathize with the view that effective altruists are adopting a certain moral perspective unreflectively. I think most have read some ethics and metaethics, and some have read more than the average philosophy major. So the ‘naive’ and simple view can be held by a sophisticated reader.
My last suggestion is that, given that the focus of effective altruism is how to do good, it’s only natural that its earliest adopters are consequentialists. If one thinks that different value systems converge on a lot of developing-world or existential-risk-related problems, then it might be appropriate to focus on the ‘how’ questions rather than trying harder to pin down a more precise notion of the good. As the movement grows, one hopes that the values of its constituency will broaden.
If your non-utilitarianism makes you “want to maximally remedy unnecessary human indigence”, and my utilitarianism* makes me want the same, then what is the issue? It seems that at an operational level, we both want the same thing.
It just seems obvious to me that, all other things equal, helping two people is better than helping one. If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.
*I’m not purely utilitarian, but I am when it comes to donating.
That sentence you quoted doesn’t exhaust my normativity, but marks the extent of it that motivates my interest in EA. The word ‘maximally’ is very unclear here; I mean maximally internal to my giving, not throughout every minutia of my consciousness and actions.
The issue I wanted to raise was several-fold: that very many effective altruists take it as obvious and unproblematic that utilitarianism exhausts human value, which is reinforced by the fact that almost no one speaks to this point; that it seriously affects the evaluation of outcomes (e.g. the x-risk community, including if not especially Nick Bostrom, speak with a straight face about totalitarianism as a condition of controlling nanotechnology and artificial intelligence); and the tactics for satisfying those outcomes.
In regard to the last point, in response to a user suggesting that we should reshape our identity, presentation and justification when speaking to conservatives, in order to effectively bring them to altruism, I posted:
“I find this kind of rationalization—subordinating one’s ethics to what can effectively motivate people to altruism—both profoundly conservative and, to some extent, undignified and inhuman, i.e. the utility slave coming full circle to enslave themselves to their own dictate of utility maximisation.”
That kind of thinking, however, is extremely common.
In response to your second paragraph:
“It just seems obvious to me that, all other things equal, helping two people is better than helping one.”
This simply begs the question: “helping” and “people” are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.
“If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.”
I find this perhaps culpable of wishful thinking; while it would be nice if an objective morality inhered in the natural structure of the world, dovetailing with my historically specific intuitions and attitudes, that doesn’t itself vindicate it as so. More often than not, it is the latter that gets imposed on the former. Something seeming obvious to oneself is no premise for its truth.
If you follow the history of utilitarianism, it is a history of increasing dilution: from the moral naturalism of Bentham’s conception of a unified human good psychologically motivating all human action, to Mill’s pluralising of that good, to Sidgwick’s wholesale rejection of naturalism and value commensurability and his argument that the only register of independent human valuation is mere intuition, to Moore’s final reductio of the tradition in Principia Ethica (‘morality consists in a non-natural good, whatever I feel it to be, but by the way, aesthetics and interpersonal enjoyment are far and away superior’). Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and which is a standard by which utilitarianism sowed the seeds of its own demise—any and all forms of utilitarianism entail serious counter-intuitions. Hence the climb of Rawls and liberal egalitarianism to predominance in the academy; it simply better satisfies the historical values and ideology of the here and now.
My philosophical background is that of the physics stereotype that utterly loathes most academic philosophy, so I’m not sure if this discussion will be all that fruitful. Still, I’ll give this a go.
“This simply begs the question: ‘helping’ and ‘people’ are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.”
At some pretty deep level, I just don’t care. I treat statements like “It is better if people get vaccinated” or “It is better if people in malaria-prone areas sleep under bednets” as almost axiomatic, and that’s my starting point for working out where to donate. If there are lots of philosophers out there who disagree, well, that’s disappointing to me, but it’s not really so bad, because there are plenty of non-philosophers out there.
“Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and which is a standard by which utilitarianism sowed the seeds of its own demise—any and all forms of utilitarianism entail serious counter-intuitions.”
The utilitarian bits of my morality do certainly come out of intuition, whether it’s of the “It is better if people get vaccinated” form or by considering amusingly complicated trolley problems as in Peter Unger’s Living High and Letting Die. And when you carry through the logic to a counter-intuitive conclusion like “You should donate a large chunk of your money to effective charity” then I bite that bullet and donate; and when you carry through the logic to conclude that you should cut up an innocent person for their organs, I say “Nope”. I don’t know anyone who strictly adheres to a pure form of any moral system; I don’t know of any moral system that doesn’t throw up some wildly counter-intuitive conclusions; I am completely OK with using intuition as an input to judging moral dilemmas; I don’t consider any of this a problem.
“it seriously affects the evaluation of outcomes (e.g. the x-risk community...)”
Yeah, the presence of futurist AI stuff in the EA community (and also its increasing prominence) is a surprise to me. I think it should be a sort of strange cousin, a group of people with a similar propensity to bite bullets as the rest of the EA community, but with some different axioms that lead them far away from the rest of us.
If you want to say that this is a consequence of utilitarian-type thinking, then I agree. But I’m not going to throw out cost-effectiveness calculations and basic axioms like “helping two people is better than helping one” just because there are people considering world dictators controlling a nano-robot future or whatever.