FWIW, I don’t find it at all surprising when people’s moral preferences contradict themselves (in terms of likely implications, as you say). I myself have many contradictory moral preferences.
Awesome, I should have checked Kelsey Piper first. Thank you!
A fictional treatment of these issues you might be interested in is the book https://en.wikipedia.org/wiki/2312_(novel) by Kim Stanley Robinson. Spacefarers are genetically distinct from Earth-dwelling humans; each planet is its own political entity.
To me, determining what will happen in the future seems less and less possible the farther we go out, to the point where I think there are no arguments that would give me a high degree of confidence in a statement about the far future like the one you put up here. For any story that supports a particular outcome, it seems there is an equally compelling story that argues against it. :)
Thanks for the perspective on dissenting views!
By accumulating resources for the future, we give increased power to whatever future decision-makers we bequeath these resources to. (Whether these decision-makers are us in 20 years, or our descendants in 200 years.)
In a clueless world, why do we think that increasing their power is good? What if those future decision-makers make a bad decision, and the increased resources we’ve given them mean the impact is worse?
In other words, if we are clueless today, why will we be less clueless in the future? One might hope cluelessness decreases monotonically over time, as we learn more, but the potential size of a mistake grows along with the resources we accumulate.
I found this very helpful.
This post really turned on a lightbulb for me, and I have thought about it often in the months since I read it.
I’m very happy to see this being discussed, and have enjoyed reading others’ answers.
Upon reflection, I seem to have a few different motivations. This surprised me, as I expected to find a single overarching one.
a) Imagining another person’s experience, leading to imagining what it is like to experience some particular suffering that I can see they are experiencing. Imagining “what it is like” involves focusing on details of the experience and rejecting generalities (not “I have cancer” but “I am trying to reach down in the shower in the morning but can’t and the water is too hot”). Soon my train of thought goes to a more objective or detached place, and I think about how there is no real difference between me and the other person, that except for blind circumstance there is no reason they should suffer when I do not.
There is an erasure of self involved. I imagine the core of my consciousness, the experiencing self, inhabiting the other person’s body and mind. From this one example I generalize; of course I should treat another person’s suffering the same as my own, because in the final analysis there is no difference between me and other people. That’s the altruism; desire for effectiveness is secondary and instrumental, not terminal.
b) Zooming out and imagining the whole of the world leads to imagining all the evil in the world. (Where “evil” is a broad term including suffering due to carelessness, due to misaligned incentives, due to lack of coordination, due to accident, etc.) It’s overwhelming; there’s a sense of perverse wonder. “The works of Moloch are as many and burn as cruelly as the white-hot stars.” This leads to a powerful feeling of being “fed-up” with the bad things. The desire for them to stop is like a very strong version of the desire to clean up an untidy room. It’s abstract and not connected to any one person’s suffering. This tends to be a stronger motivating force than a); if a) is empathy, this is anger.
Eliezer’s fiction is particularly good at conjuring this mind-state for me: for example, the “Make it stop” scene in http://yudkowsky.net/other/fiction/the-sword-of-good .
This mind-state seems more inherently connected to effectiveness than a), though effectiveness is still instrumental and not terminal. I want us to be making a strong/effective stand against the various bad things; when we’re not doing that, I am frustrated. I am less willing to tolerate “weakness”/ineffectiveness because I conceptualize us as in a struggle with high stakes.
This strikes me as incredibly good advice.
Just wanted to say I thought this post was great and really appreciate you writing it! I have a hard-to-feed hunger to know what the real situation with nuclear weapons is like, and this is one of the few things to touch it in the past few years. Any other resources you’d recommend?
I’m surprised and heartened to hear some evidence against the “Petrov singlehandedly saved the world” narrative. Is there somewhere I can learn about the other nuclear ‘close calls’ described in the book? (should I just read the book?)
Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn’t constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.
I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you’re saying sounds more intuitively appealing. However, I do not believe (nor disbelieve) this.
Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.
For example, “True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success—it would just partially counterbalance them.” My intuition says the opposite of this. I don’t think it’s at all clear whether increasing the capability of the U.S. military is a good or bad thing.
I agree that object-level progress is to be preferred over meta-level progress on methodology.
I gave this post a strong upvote. It articulated something which I feel but have not articulated myself. Thank you for the clarity of writing which is on display here.
That said, I have some reservations which I would be interested in your thoughts on. When we argue about whether something is an ideology or not, we are assuming that the word “ideology” is applied to some things and not others, and that whether or not it is applied tells us useful things about the things it is applied to.
I am convinced that on the spectrum of movements, we should put effective altruism closer to libertarianism and feminism than the article you’re responding to would indicate. But what is on the other end of this spectrum? Is there a movement/”ism” you can point to that you’d say we should put on the other side of where we’ve put EA -- **less** ideological than it?
I wish I could triple-upvote this post.
You can! :P Click and hold for “strong upvote.”
> Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors.
If we operate under the “ethical precautionary principle” you laid out in the previous post (always behave as if there was another crucial consideration yet to discover), how do we do this? We might think that some intervention will increase the wisdom of future actors, based on our best analysis of the situation. But we fear a lurking crucial consideration that will someday pounce and reveal that actually the intervention did nothing, or did the opposite.
In other words, don’t we need to be *somewhat* clueful already in order to bootstrap our way into more cluefulness?
Thank you for this series — I think this is an enormously important consideration when trying to do good, and I wish it were talked about more.
I am rereading this, and find myself nodding along vigorously to this paragraph:
> I think this implies operating under an ethical precautionary principle: acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. always acting as if we are in the “no we can’t become clueful enough” category).
But not the following one:
> Does always following this precautionary principle imply analysis paralysis, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis & contemplation is itself a decision (“If you choose not to decide, you still have made a choice”).
Perhaps we indeed should move towards “analysis paralysis”, rejecting actions whose long-term effects we are not highly certain of. Given the maxim that we should always act as if we are in the “no we can’t become clueful enough” category, this approach would reject actions that we expect to have large long-term effects (e.g. radically changing government policy, founding a company that becomes very large). But it’s not clear to me that it would reject all actions. Intuitively, P(cooking myself this fried egg will have large long-term effects) is low.
We can ask ourselves whether we are always in the position of the physician treating baby Hitler: every day when we go into work, we face many seemingly inconsequential decisions that are actually very consequential. That is, P(cooking myself this fried egg will have large long-term effects) is actually high. But this doesn’t seem self-evident.
In other words, it might be tractable to minimize the number of very consequential decisions the world makes, and this might be a way out of extreme consequentialist cluelessness. For example, imagine a world made up of many populated islands, where overseas travel is impossible and so the islands are causally separated. In such a world, the possible effects of any one action end at the island on which it started, so the consequences of any one action are capped in a way they are not in our world.
It seems to me that this approach would imply an EA that looks very different than the current one (and recommendations that look different than the ones you make in the next post). But it may also be a sub-consideration of the general considerations you lay out in your next post. What do you think?