Thank you for this, Keir. I agree that some conclusions that EAs have come to are uncontroversial among non-utilitarians. And EAs have tried to appeal to non-utilitarians. Singer’s Drowning Child thought experiment does not appeal to utilitarianism. Ord and MacAskill both (while making clear they are sympathetic to total utilitarianism) try to appeal to non-utilitarians too.
However, there are some important cause prioritisation questions that can’t really be answered without committing to some philosophical framework. It’s plausible that these questions do make a real, practical difference to what we individually prioritise. So, doing EA without philosophy seems a bit like trying to do politics without ideology. Many people may claim to be doing so, but they’re still ultimately harbouring philosophical assumptions.
You bring up the comparison between donating to the opera and donating to global health as one that non-utilitarians like Sen can deal with relatively easily. But Amartya Sen is still a consequentialist and it’s notable that his close colleague (and ideological soulmate) Martha Nussbaum has recently written about wild-animal suffering, a cause which utilitarians have been concerned about for some time. Consequentialists and pluralists (as long as they include some degree of consequentialism in their thinking) can still easily prioritise. It’s less clear that pure deontologists and virtue ethicists can, without ultimately appealing to consequences.
Finally, I don’t think there’s much philosophical difference between Bill Gates and Peter Singer. Gates wrote a blurb praising Singer’s The Most Good You Can Do, and in his recent annual newsletter he said that his goal is to “give my wealth back to society in ways that do the most good for the most people”.
Hello! Thank you for such a thoughtful comment. You’re obviously right on the first point that Singer/Ord/MacAskill have tried to appeal to non-utilitarians, and I think that’s great—I just wish, I suppose, that this was more deeply culturally embedded, if that’s a helpful way to put it. (But the fact this is already happening is why I really don’t want to be too critical!)
And I fully, completely agree that you can’t do effective altruism without philosophy or making value-judgements. (Peter made a similar point to yours in a comment on my blog.) But I think what I’m trying to get at is something slightly different: at a very basic level, most moral theories can get on board with what the EA community wants to do, and while there might be disagreements between utilitarians and other theories down the line, there’s no reason they shouldn’t be able to share these common goals, nor any reason to think non-utilitarians’ contributions to EA should be anything other than net-positive by a utilitarian standard. To me that’s quite important, because I think a great benefit of effective altruism as a whole is how well it focusses the mind on making a positive marginal impact, and I would really like to see many more people adopt that kind of mindset, even if they ultimately make subjective choices much further down the line of impact-making that a pure utilitarian disagrees with. (And indeed such subjective and contentious moral choices within EA already happen, because utilitarianism doesn’t tell you straightforwardly how to decide how to weight animal welfare, for example. So I suppose I really don’t think this kind of more culturally value-plural form of EA would encounter philosophical trouble any more than EAs already do.)
On Gates and Singer’s philosophical similarities, I agree! But I think Gates wears his philosophy much more lightly than most effective altruists do, and has escaped some ire because of it, which is what I was trying to get at—although I realise this was probably unhelpfully unclear.
Thanks for your response. I don’t think we disagree on as much as I thought, then! I suppose I’m less confident than you that those disagreements down the line aren’t going to lead to the same sort of backlash that we currently see.
If we see EA as a community of individuals who are attempting to do good better (by their own lights), then while I certainly agree that the contributions of non-utilitarians are net-positive from a utilitarian perspective, we utilitarian EAs (including leaders of the movement, who some might say have an obligation to be more neutral for PR purposes) may still think it’s best to try to persuade others that our preferred causes should be prioritised even if it comes at the expense of bad PR and turning away some non-utilitarians. Given that philosophy may cause people to decisively change their views on prioritisation, spreading certain philosophical views may also be important.
I guess I am somewhat cheekily attempting to shift the burden of responsibility back onto non-utilitarians. As you say, even people like Torres are on board with the core ideas of EA, so in my view they should be engaging in philosophical and cause prioritisation debates from within the movement (as EAs do all the time, as you note) instead of trying to sabotage the entire project. But I do appreciate that this has become more difficult to do. I think it’s true that the ‘official messaging’ has subtly moved away from the idea that there are different ‘wings’ of EA (global health, animal welfare, existential risk) and toward an idea that not everyone will be able to get on board with (though I still think they should be able to, like many existing non-utilitarian EAs).
Trust seems to be important here. EAs can have philosophical and cause prioritisation disagreements while trusting that people who disagree with them are committed to doing good and are probably doing some amount of good (longtermists can think global health people are doing some good, and vice versa). Similarly, two utilitarians can, as you say, disagree empirically about the relative intensity of pleasure and suffering in different species without suspecting that the other isn’t making a good-faith attempt to understand how to maximise utility. On the other hand, critics like Torres and possibly some of the others you mentioned may think that EA is actively doing harm (and/or that prominent EAs are actively evil). One way it could be doing harm is by diverting resources away from the causes they think are important (and instead of trying to argue for their causes from within the movement, they may, on consequentialist grounds, think it’s better to try to damage the movement).
All of this is to say that I think these ‘disagreements down the line’ are mostly to blame for the current state of affairs and can’t really be avoided, while conceding that ‘official EA messaging’ has also played its part. (Though, as a take-no-prisoners utilitarian, I’m not really sure whether that’s net-negative or not!)
A nitpicking (and late) point of order I can’t resist making, because it’s a pet peeve of mine, re this part:
You don’t say explicitly here that staring at the repugnant conclusion and sticking to your guns is specifically the result of being a bullet-biting utilitarian, but it seems heavily implied by your framing. To be clear, this is roughly the argument in this part of the book:
- population ethics provably leads every theory to one or more of a set of highly repulsive conclusions most people don’t want to endorse
- out of these, the least repulsive one (my impression is that this is the most common view among philosophers, though don’t quote me on that) is the repugnant conclusion
- nevertheless, the wisest approach is to apply a moral uncertainty framework that balances all of these theories, which roughly adds up to a version of the critical-level view; this bites a sandpapered-down version of the repugnant conclusion, as well as (editorializing a bit here; I don’t recall MacAskill noting this) a version of the sadistic conclusion that is more palatable and principled than the averagist one
Note that his argument doesn’t invoke utilitarianism anywhere; it just invokes the relevant impossibility theorems and some vague principled gesturing around semi-related dilemmas for person-affecting ethics. Indeed, many non-utilitarians bite the repugnant conclusion bullet as well: arguably the most famous paper in its defense was written by a deontologist.
I can virtually guarantee you that whatever clever alternative theory you come up with, it will take me all of five minutes to point out the flaws. Either it is in some crucial way insufficiently specific (this is not a virtue of the theory; actual actions are specific, so all this does is hide which bullets the theory will wind up biting and when), or it winds up biting one or more bullets, possibly different ones at different times (as, for instance, theories that deny the independence of irrelevant alternatives do). There are other moves in this game, in particular making principled arguments for why different theories lead to these conclusions in more or less acceptable ways. But just pointing to the counterintuitive implication of the repugnant conclusion is not a move in that game; it is merely a move, not obviously worse than any other, in the already solved game of “which bullets exist to be bitten”.
Maybe the right approach to this is to just throw up our hands in frustration and say “I don’t know”, but then it’s hard to fault MacAskill, who, again, does a more formalized version of essentially this rather than just biting the repugnant conclusion bullet.
Part of my pet peeve here is with discourse around population ethics, but it also feels like discourse around WWOTF is gradually drifting further away from anything I recognize from its contents. There’s plenty to criticize in the book, but skimming the secondary literature a few months after its release, you would think it was basically arguing “classical utilitarianism, therefore future”, which is not remotely what the book is actually like.
Thanks so much for this, great reflection!
One small comment I’d make is that Bill Gates has been hammered for the way he does philanthropy, I would argue more severely than effective altruism has been. The criticism has come most notably from mainstream development orgs and from a number of fairly high-profile conspiracy theories.
If the debacles of the last few months continue, though, we might overtake Bill on the criticism front. Let’s hope not.
I think that, like you say, EA AGI-doomer longtermists might have performed one of the most botched PR jobs in history. Climate change advocates rightly focus on protecting the world for our grandchildren, and on the fact that the effects of climate change will be far worse for the poorest people. I’m not sure I’ve ever heard AGI people talking in these kinds of heartstring-pulling, compassionate terms. The AI crowd should be making the same arguments, realising that the general public has different frames of reference than they do.
Thank you! That’s very interesting re Gates; that wasn’t my impression at all, but to be honest I may very well be living in a bubble of my own making, and I’m sure I’ve missed plenty of criticism. That said, I think I might still suggest that there are two different kinds of criticism here: EA gets quite a bit of high-status criticism from fairly mainstream sources (academics, magazines, etc.); if Bill Gates’s criticism comes more from conspiracy loons, then I would suggest it’s probably less damaging, even if it’s more voluminous. (I think both have got a lot of flak from those development orgs who were quite enjoying being complacent about whether they were actually being successful or not.)
And yes, I completely agree re longtermism and PR! I wrote something quite similar a couple of months ago. It seems to me that longtermism has an obvious open goal here and yet hasn’t (yet) taken it.
I agree that Gates has been heavily criticised too. This is probably because he’s a billionaire and because he’s involved himself so heavily in issues (such as the pandemic) which attract lots of attention. It might not be a coincidence, though, that there’s not much philosophical difference between Bill Gates and, say, Peter Singer. Gates wrote a blurb praising Singer’s The Most Good You Can Do, and in his recent annual newsletter he said that his goal is to “give my wealth back to society in ways that do the most good for the most people”.
If it’s true that longtermism is much more controversial than focusing on x-risks as a cause area (which can be justified according to mainstream cost-benefit analysis, as you said), then maybe we should have stuck to promoting mass market books like The Precipice instead of WWOTF! The Precipice has a chapter explicitly arguing that multiple ethical perspectives support reducing x-risk.
While I definitely think it’s correct that EA should distance itself from adopting any one moral philosophy and instead take a more pluralistic approach, it might still be useful to have a wing of the movement dedicated to moral philosophy. I don’t see why EA can’t be a haven for moral and political philosophers collaborating with other EA members to do the most good possible, as it might be worthwhile to focus on wide-scale systemic change and more abstract, fundamental questions such as what value is in the first place. In fact, one weakness of EA is precisely that it isn’t pluralistic in terms of the demographics of its members and how they view systemic change; for example, consider Tyler Cowen’s quote about EA’s demographics in the United States:
“But I think the demographics of the EA movement are essentially the US Democratic Party. And that’s what the EA movement over time will evolve into. If you think the existential risk is this kind of funny, weird thing, it doesn’t quite fit. Well, it will be kind of a branch of Democratic Party thinking that makes philanthropy a bit more global, a bit more effective. I wouldn’t say it’s a stupider version, but it’s a less philosophical version that’s a lot easier to sell to non-philosophers.”
If wide-scale philosophical collaboration were incorporated into EA, then I think it might be a rare opportunity for political philosophers of all stripes (e.g., libertarians, socialists, anarchists, neoliberals, etc.) to collaborate on systemic questions relating to how to do the most good. I think this is especially needed considering how polarised politics has become. Additionally, considering abstract questions about the fundamental nature of value would particularly help with the vaguer expected-value calculations, such as those that try to compare the value of qualitatively distinct experiences.
Hi there, and thanks for the post. I find myself agreeing a lot with what it says, so my biases are probably aligning with it, and that has to be said. I am still trying to catch up with the main branches of ethical thought and give each a fair chance, which I think utilitarianism deserves, even if it instinctively feels ‘wrong’ to me (by instinct and inclination I am probably a very Kantian deontologist).