Thank you, Lila, for your openness in explaining your reasons for leaving EA. It’s good to hear legitimate reasons why someone might leave the community, and it’s certainly better than the outsider anti-EA arguments that too often misrepresent the movement. I hope that other insiders who leave will also be kind enough to share their reasoning, as you have here.
While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.
Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.
This is because I do not think utilitarianism is needed to support as many of EA’s ideas as Lila does. For example, non-consequentialist moral realists can still use expected value to try to maximize the good they do, without holding that the maximization itself is the ultimate source of that good. Presumably, if you think lying is bad, then refraining from lying twice may be better than refraining from lying just once.
I agree with Lila that many EAs are too glib about treating deaths from violence as no worse than deaths from non-violent causes. But to the extent that this is true, we can simply weight the two differently. For example, Lila rightly points out that “violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework”. EAs should definitely take these extra considerations about violence into account.
But the main difference between myself and Lila here is that when she sees EAs not taking things like this into consideration, she takes that as an argument against EA; against utilitarianism; against expected value. Whereas I take it as an improper expected value estimate that doesn’t take into account all of the facts. For me, this is not an argument against EA, nor even an argument against expected value—it’s an argument for why we need to be careful about taking into account as many considerations as possible when constructing expected value estimates.
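To make the point about weighting concrete, here is a minimal sketch, with entirely made-up numbers, of how a disvalue adjustment for violence fits inside an ordinary expected-value comparison rather than replacing it:

```python
# Illustrative only: the trauma multiplier below is invented for this
# sketch, not an estimate anyone has defended.
TRAUMA_MULTIPLIER = 1.5  # hypothetical extra disvalue per violent death


def badness(deaths, violent=False):
    """Total disvalue of some deaths, optionally weighted for violence."""
    return deaths * (TRAUMA_MULTIPLIER if violent else 1.0)


# A naive estimate treats 10 violent deaths and 10 non-violent deaths
# identically; the weighted estimate does not.
assert badness(10, violent=True) > badness(10, violent=False)
```

The framework is unchanged; only the inputs to the estimate improve.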
As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.
There are all sorts of moral anti-realists, and almost by definition it’s difficult to predict what any given moral anti-realist will value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realists without leaving the EA movement.
The way I think about violence has to do with the importance/tractability/neglectedness framework: I see it as very important but not all that tractable. I do see a lot of its importance as related to the indirect harms it causes. What does it do to a person’s family when they are assaulted or killed, or when they go to prison for violence? How does it affect their children, and other children around them who are forming a concept of what’s normal? As a social worker, I saw a lot of people harmed by the violence they themselves had carried out, whether as soldiers, gang members, or family members. (I think about indirect effects with more typical EA causes too—I suspect parental grief is a major cost of child mortality that we don’t pay enough attention to.)
My understanding is that the most promising interventions for large-scale violence prevention focus on preventing a return to war after an initial conflict, since areas that have just had a war are particularly likely to have another one soon. Copenhagen Consensus considers the most effective intervention to be “deploy UN peacekeeping forces”, which isn’t easy to influence (though some of the others listed seem more tractable).
http://www.copenhagenconsensus.com/sites/default/files/CP%2B-%2BConflicts%2BFINISHED.pdf
I really like this response—thanks, Eric. I’d say the way I think about maximizing expected value is that it’s the natural thing you’ll end up doing if you’re trying to produce a particular outcome, especially a large-scale one that doesn’t hinge much on your own mental state and local environment.
Thinking in ‘maximizing-ish ways’ can be useful at times in lots of contexts, but it’s especially likely to be helpful (or necessary) when you’re trying to move the world’s state in a big way; not so much when you’re trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you’re pursuing is something like ‘have fun and unwind this afternoon watching a movie’. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.
In real life, I’m not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I’d rather spend $1000 to protect two people from suffering from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason with weird extreme Pascalian situations, I can still recognize that I’d rather spend $1000 to protect those two people, than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).
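The two comparisons in the paragraph above can be written out as a quick expected-value calculation (a toy illustration using the text’s numbers, not a real cost-effectiveness model):

```python
# Each option is a list of (probability, people_protected) outcomes.

def expected_protected(outcomes):
    """Expected number of people protected across possible outcomes."""
    return sum(p * n for p, n in outcomes)


option_one = [(1.0, 1)]               # protect one person for certain
option_two = [(1.0, 2)]               # protect two people for certain
option_risky = [(0.5, 3), (0.5, 0)]   # 50% chance of protecting three

# Both stated preferences agree with the expected-value ordering:
assert expected_protected(option_two) > expected_protected(option_one)    # 2.0 > 1.0
assert expected_protected(option_two) > expected_protected(option_risky)  # 2.0 > 1.5
```

Nothing here requires being a maximizer in every domain of life; the preferences alone are enough to pin down these choices.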
Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I’m not an EV maximizer in other parts of my life. (Much like I’ll act like a well-functioning calculator when I’m achieving the goal of getting a high score on a math quiz, even though I don’t act calculator-like when I pursue other goals.)
When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply.
The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.
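The Allais reversal mentioned above can be made concrete with the standard textbook payoffs (these particular dollar amounts are the classic presentation, not numbers from the text):

```python
# Gambles as (probability, payoff-in-millions) lists: the classic
# Allais setup.
gamble_1a = [(1.00, 1.0)]                          # $1M for certain
gamble_1b = [(0.89, 1.0), (0.10, 5.0), (0.01, 0.0)]
gamble_2a = [(0.11, 1.0), (0.89, 0.0)]
gamble_2b = [(0.10, 5.0), (0.90, 0.0)]


def ev(gamble):
    return sum(p * x for p, x in gamble)


# In expectation, 1B beats 1A and 2B beats 2A; yet many people choose
# 1A and 2B, a combination that violates expected utility theory for
# every utility-of-money curve, because the two pairs differ only by a
# common 0.89 chance of winning $1M.
assert ev(gamble_1b) > ev(gamble_1a)   # 1.39 > 1.00
assert ev(gamble_2b) > ev(gamble_2a)   # 0.50 > 0.11
assert abs((ev(gamble_1b) - ev(gamble_1a))
           - (ev(gamble_2b) - ev(gamble_2a))) < 1e-9
```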
When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities.
Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities—anything you talk about seems like “an amount worth considering.”
And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.
So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life.” Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.
On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.
But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from zero to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.
As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.
Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.
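A toy two-tier utility illustrates the collapse described above: under lexicographic comparison, the lower tier can only ever matter on an exact tie in the upper tier, and with continuous upper-tier values an exact tie essentially never occurs.

```python
# Each option's value is an (upper_tier, lower_tier) pair. Python
# compares tuples lexicographically, which is exactly lexical priority.
options = [
    (0.7000001, 0.0),   # the tiniest edge in upper-tier value
    (0.7000000, 99.0),  # an enormous lower-tier value
]

best = max(options)

# The microscopic upper-tier difference decides; the lower tier is
# never consulted.
assert best == (0.7000001, 0.0)
```

This is the Norvig point about Asimov’s First Law in miniature: with strict priority, some tiny upper-tier factor always suffices to determine the decision, so the lower tiers never show up in behavior at all.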
I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.
But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.
It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian—at least when I am doing something that is overwhelmingly more important than my own feelings about it—which is most of the time, because there are not many utilitarians, and many things left undone.
… Also, just to be clear—since this seems to be a weirdly common misconception—acting like an expected value maximizer is totally different from utilitarianism. EV maximizing is a thing wherever you consistently care enough about your actions’ consequences; utilitarianism is specifically the idea that the thing people should (act as though they) care about is how good things are for everyone, impartially.
But often people argue against the consequentialism aspect of utilitarianism and the consequent willingness to quantitatively compare different goods, rather than arguing against the altruism aspect or the egalitarianism; hence the two ideas get blurred together a bit in the above, even though you can certainly maximize expected utility for conceptions of “utility” that are partial to your own interests, your friends’, etc.
For more background on what I mean by ‘any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is’, see e.g. ‘Coherent decisions imply a utility function’ and The “Intuitions” Behind “Utilitarianism”, excerpted above.