Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
I have a gift wish list and I’ve put donations to EA charities on it. I think this is in general a great idea, though I can see why many people might be uncomfortable with it:
Gift-giving is a sacred ritual for some people, something we do to bond as family/friends, and a little “treat yourself” moment that happens once or twice a year. There are good psychological reasons behind it, in other words, and it is not clear that giving money to charity accomplishes all the same goals. The spectre of the “altruist who is so committed that they don’t have a life anymore” looms.
I think the response to this is to acknowledge the truth behind it, but then point out that we are a very long way from that extreme “don’t have a life anymore” situation. The status quo is currently zero charitable donations on the holidays; surely it won’t hurt much to change that a bit. Indeed, by giving something to charity at the same time that we bond and treat ourselves, we might actually improve the bonding and the treating.
Yep. Any ideas about what another such dimension might be? (There are of course the “normal” other dimensions, like average well-being, that are included in the calculation of utilons.)
If values are chosen, not discovered, then how is the choice of values made?
Do you think the choice of values is made, even partially, even implicitly, in a way that involves something that fits the loose definition of a value—like “I want my values to be elegant when described in english” or “I want my values to match my pre-theoretic intuitions about the kinds of cases that I am likely to encounter?” Or do you think that the choice of values is made in some other way?
I too think that values are chosen, but I think that the choice involves implicit appeal to “deeper” values. These deeper values are not themselves chosen, on pain of infinite regress. And I think the case can be made that these deeper values are complex, at least for most people.
I found this very illuminating, thanks!
Nitpick: You say “There are altruistic activities which fall outside this grouping – for example, working to improve biodiversity for its own sake. But these don’t improve anyone’s well-being, and so fall outside the scope of effective altruism,” but I thought EA was defined more broadly than that. My understanding of EA is that if you really think that e.g. preserving biodiversity for its own sake is more worthwhile than the other causes then that’s fine, EA can help you find the most effective way to do that. I would have said that the reason why non-well-being-improving causes aren’t part of the EA Big Three is because very few people think those causes are more urgent than well-being-improving causes in the first place. Thoughts?
I for one would DEFINITELY use a quantitative model like this one. If you need incentive to think more and develop a more sophisticated model and then explain and justify it in a new post… well, I’d love it if you did that.
“the resulting world will be a global (2) melting pot ruled by suffering-maximizing Shariah law.”
This seems extremely implausible to me. Historically, assimilation and globalization have been the norm. Also, Shariah isn’t even implemented in many Islamic countries; why would it be implemented in e.g. 2050 Britain?
“That’s a worse existential risk than pandemics or climate change; in fact it would be worse than human extinction.”
Hell no! Standards of living even in Saudi Arabia are probably better than they’ve been in most places for most of human history, and things are only going to get better.
On a more abstract level: it really seems like you are exaggerating the danger here. And since the danger you describe comes from a particular culture/religious group, that exaggeration is especially insensitive & harmful.
You might say “I agree that the odds of this nightmare scenario happening are very small, but because the scenario is so bad, I think we should still be concerned about it.” I think that when we start considering odds <1% of sweeping cultural change, then we ought to worry about all sorts of other contenders in that category too. Communism could revive. A new, fiery religion could appear. World War Three could happen. There are so many things that would be worse, and more likely, than the scenario you are considering.
I agree that it’s dangerous to generalize from fictional evidence, BUT I think it’s important not to fall into the opposite extreme, which I will now explain...
Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare “This is what morality is!” and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).
The danger, which I think is the opposite danger to the one you identified, is that people “bite the bullet” and say “I’m sticking with my principles. I guess what seems abhorrent isn’t abhorrent after all; I guess what seems good isn’t good after all.”
In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak. Even if this makes our total set of principles more complicated.
In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it’s almost certainly not simple enough to fit on a t-shirt.
A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don’t hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.
[Edit: Oh, and if you look at what the OP was doing with the Giver example, it wasn’t generalizing based on fictional evidence, it was rejecting a generalization.]
I’ve struggled with similar concerns. I think the things EAs push for are great, but I do think that we are more ideologically homogeneous than we should ideally be. My hope is that as more people join, it will become more “big tent” and useful to a wider range of people. (Some of it is already useful for a wide range of people, like the career advice.)
I second this! I’m one of the many people who think that maximizing happiness would be terrible. (I mean, there would be worse things you could do, but compared to what a normal, decent person would do, it’s terrible.)
The reason is simple: when you maximize something, by definition that means being willing to sacrifice everything else for the sake of that thing. Depending on the situation, you might not need to sacrifice anything else; maximizing that one thing might even yield lots of other things as a bonus—but in principle, if you are maximizing something, then you are willing to sacrifice everything else for the sake of it. Justice. Beauty. Fairness. Equality. Friendship. Art. Wisdom. Knowledge. Adventure. The list goes on and on. If maximizing happiness required sacrificing all of those things, such that the world contained none of them, would you still think it was the right thing to do? I hope not.
(Moreover, based on the laws of physics as we currently understand them, maximizing happiness WILL require us to sacrifice all of the things mentioned above, except possibly Wisdom and Knowledge, and even they will be concentrated in one being or kind of being.)
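To make the structural point concrete, here is a minimal sketch (a toy model of my own, not anything from the discussion above): a brute-force optimizer with a fixed resource budget whose utility function scores only happiness. All the names and numbers are made up; the point is just that the optimum spends everything on the maximized good and nothing on anything else.

```python
# Toy illustration: an optimizer that scores worlds *only* by happiness will,
# by construction, trade away every other good whenever doing so buys even a
# little more happiness. Goods and budget are hypothetical.
from itertools import product

GOODS = ["happiness", "justice", "beauty", "friendship"]
BUDGET = 10  # units of resource to allocate among the goods

def happiness_only_score(allocation):
    """A utility function that values nothing but happiness."""
    return allocation["happiness"]

# Enumerate every feasible allocation and pick the one the maximizer prefers.
feasible = (
    dict(zip(GOODS, alloc))
    for alloc in product(range(BUDGET + 1), repeat=len(GOODS))
    if sum(alloc) <= BUDGET
)
best = max(feasible, key=happiness_only_score)
print(best)
# -> {'happiness': 10, 'justice': 0, 'beauty': 0, 'friendship': 0}
```

The same corner-solution behavior shows up for any objective that strictly increases in a single good: the other goods survive only insofar as they happen to be instrumentally useful for the thing being maximized.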
This is a problem with utilitarianism, not EA, but EA is currently dominated by utilitarians.
I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they’ve usually thought about the situation more carefully than people who just go with their intuition. I’m not saying people should just go with their intuition.
I’m saying that we don’t have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn’t lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won’t work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts—this bias is not irrational.)
I agree with Owen. I don’t have anything to add to what’s been said, other than a response to the strongest reason against having that norm: It only conflicts with the norm of “do what’s most effective” if it truly is more effective to donate to one’s own employer. But because of the signaling/weirdness reasons (and, yes, the bias) that doesn’t seem to be true. We’re sophisticated enough that we can have a hierarchy of norms, with “do what’s most effective” at the top and “don’t donate to your employer unless there’s a special circumstance” as a lower norm—as a helpful heuristic/guideline.
How much money is saved from taxes by foregoing salary? If it’s at least 20% of the donation then I might change my mind.
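For what it’s worth, here is the back-of-the-envelope calculation I have in mind, with purely illustrative numbers, and assuming an ordinary donation would not itself be tax-deductible (deductibility would shrink or erase the advantage):

```python
# Back-of-the-envelope sketch with made-up numbers. Assumes an ordinary
# donation would NOT itself be tax-deductible; if it were, the advantage
# of foregoing salary would be smaller or zero.

def charity_receives(salary_foregone: float, marginal_rate: float):
    """Compare foregoing salary with taking it, paying tax, and donating."""
    via_foregone_salary = salary_foregone                      # charity keeps it all
    via_post_tax_donation = salary_foregone * (1 - marginal_rate)
    tax_saved = via_foregone_salary - via_post_tax_donation    # = rate * salary
    return via_foregone_salary, via_post_tax_donation, tax_saved

full, post_tax, saved = charity_receives(1000.0, 0.25)
print(full, post_tax, saved)  # 1000.0 750.0 250.0 -> savings are 25% of the donation
```

Under those assumptions the tax saved equals the marginal rate times the foregone salary, so my 20% threshold corresponds to a 20% marginal tax rate.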
This is great. Please do it again next year.
Anyone have thoughts/response to this critique of Effective Animal Altruism?
“Bob: agree, to make lots of suffering, it needs pretty human-like utility functions that lead to simulations or making many sentient beings.”
I’m pretty sure this is false. Superintelligent singletons that don’t specifically disvalue suffering will make lots of it (relative to the current amount, i.e. one planetful) in pursuit of other ends. (They’ll make ancestor simulations, for example, for a variety of reasons.) The amount of suffering they’ll make will be far less than the theoretical maximum, but far more than what e.g. classical utilitarians would do.
If you disagree, I’d love to hear it—I’m thinking about writing a paper on this anyway, and it would help to know that people are interested in the topic.
And I think normal humans, if given command of the future, would make even less suffering than classical utilitarians.
Sure, sorry for the delay.
The ways that I envision suffering potentially happening in the future are these:
—People deciding that obeying the law and respecting the sovereignty of other nations is more important than preventing the suffering of people inside them
—People deciding that doing scientific research (simulations are an example of this) is well worth the suffering of the people and animals experimented on
—People deciding that the insults and microaggressions that affect some groups are not as bad as the inefficiencies that come from preventing them
—People deciding that it’s better to have a few lives without suffering than many, many lives with suffering (even when the many lives are still, all things considered, good)
—People deciding that AI systems should be designed in ways that make them suffer in their daily jobs, because it’s most efficient that way
Utilitarianism comes down pretty strongly in favor of these decisions, at least in many cases. My guess is that in post-scarcity conditions, ordinary people will be more inclined to resist these decisions than utilitarians will. The big exception is the sovereignty case; there I think utilitarians would lead to less suffering than average humans would. But those cases will only arise for a decade or so and will be relatively small-scale.
Thanks for this! Even within EA I think there’s a need for more brainstorming of different cause areas, and you’ve presented a well-researched case for this one. I am tentatively convinced!
What do you think is the best counterargument? That is, what’s the best reason to think that maybe this isn’t as tractable/neglected/important as you think?
I think the biggest concern (for me) is whether or not the research on the matter is solid. Does physical punishment cause worse outcomes, or does it merely correlate with them? Etc. This is important both for determining how serious the problem is and for determining how tractable it is (because without research to back up our claims, it will be hard to convince anyone to change). I haven’t looked into it myself of course, but I’m glad you have.
That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course—it’s hard to give out a prize on the basis of raw hunches—but nevertheless we should work towards finding ways to avoid it.
Kudos to OPP for being so transparent in their thought process though!
Interesting. I’m a moral anti-realist who also focuses on suffering, but not to the extent that you do (e.g. not worrying that much about suffering at the level of fundamental physics.) I would have predicted that theoretical arguments were what convinced you to care about fundamental physics suffering, not any sort of visceral feeling.
This is a great list, thanks! The software patent reform idea was surprising to me, but in a good way.
You say a lot about these four causes; what about the rest? You’ve said a bit (though not in so many words) about why you don’t go in for x-risk reduction (you want to make a difference in the next few decades) but what about e.g. immigration system reform, justice system reform, and pandemic prevention?