This strikes me as incredibly good advice.
Just wanted to say I thought this post was great and really appreciate you writing it! I have a hard-to-feed hunger to know what the real situation with nuclear weapons is like, and this is one of the only things to touch it in the past few years. Any other resources you’d recommend?
I’m surprised and heartened to hear some evidence against the “Petrov singlehandedly saved the world” narrative. Is there somewhere I can learn about the other nuclear ‘close calls’ described in the book? (should I just read the book?)
Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn’t constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.
I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you’re saying sounds more intuitively appealing. However, I do not believe (nor disbelieve) this.
Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.
For example, “True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success—it would just partially counterbalance them.” My intuition says the opposite of this. I don’t think it’s at all clear whether increasing the capability of the U.S. military is a good or bad thing.
I agree that object-level progress is to be preferred over meta-level progress on methodology.
I gave this post a strong upvote. It articulated something which I feel but have not articulated myself. Thank you for the clarity of writing which is on display here.
That said, I have some reservations which I would be interested in your thoughts on. When we argue about whether something is an ideology or not, we are assuming that the word “ideology” is applied to some things and not others, and that whether or not it is applied tells us useful things about the things it is applied to.
I am convinced that on the spectrum of movements, we should put effective altruism closer to libertarianism and feminism than the article you’re responding to would indicate. But what is on the other end of this spectrum? Is there a movement/”ism” you can point to that you’d say we should put on the other side of where we’ve put EA -- **less** ideological than it?
I wish I could triple-upvote this post.
You can! :P. Click-and-hold for “strong upvote.”
Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors.
If we operate under the “ethical precautionary principle” you laid out in the previous post (always behave as if there was another crucial consideration yet to discover), how do we do this? We might think that some intervention will increase the wisdom of future actors, based on our best analysis of the situation. But we fear a lurking crucial consideration that will someday pounce and reveal that actually the intervention did nothing, or did the opposite.
In other words, don’t we need to be *somewhat* clueful already in order to bootstrap our way into more cluefulness?
Thank you for this series — I think this is an enormously important consideration when trying to do good, and I wish it were talked about more.
I am rereading this, and find myself nodding along vigorously to this paragraph:
I think this implies operating under an ethical precautionary principle: acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. always acting as if we are in the “no we can’t become clueful enough” category).
But not the following one:
Does always following this precautionary principle imply analysis paralysis, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis & contemplation is itself a decision (“If you choose not to decide, you still have made a choice”).
Perhaps we indeed should move towards “analysis paralysis”, rejecting actions whose long-term effects we are not highly certain of. Given the maxim that we should always act as if we are in the “no we can’t become clueful enough” category, this approach would reject actions that we anticipate will have large long-term effects (e.g. radically changing government policy, founding a company that becomes very large). But it’s not clear to me that it would reject all actions. Intuitively, P(cooking myself this fried egg will have large long-term effects) is low.
We can ask ourselves whether we are always in the position of the physician treating baby Hitler: every day when we go into work, we face many seemingly inconsequential decisions that are actually very consequential. That is, P(cooking myself this fried egg will have large long-term effects) is actually high. But this doesn’t seem self-evident.
In other words, it might be tractable to minimize the number of very consequential decisions the world makes, and this might be a way out of extreme consequentialist cluelessness. For example, imagine a world made up of many populated islands, where overseas travel is impossible and so the islands are causally separated. In such a world, the possible effects of any one action end at the island where it started, so the consequences of any one action are capped in a way they are not in our world.
It seems to me that this approach would imply an EA that looks very different than the current one (and recommendations that look different than the ones you make in the next post). But it may also be a sub-consideration of the general considerations you lay out in your next post. What do you think?
Have you heard of Harry Potter and the Methods of Rationality (http://www.hpmor.com/) and/or http://unsongbook.com ? I think they serve some of this role for the community already.
It’s interesting they are both long-form web fiction; we don’t have EA tv shows or rock bands that I know of.
Thanks for posting about this! The experiences I’ve had with art feel like a big part of what motivates my altruism.
One of the ways art can encourage altruism is by rendering real the life of another person, making you experience their suffering or joy as your own. Many pieces of art have this effect on me, too many to name—indeed I think of it as a defining quality of good art.
Another way art can encourage altruism is by taking a zoomed-out perspective and engaging with moral ideals in the abstract. This you might call “humanistic”. I’ve listed mostly these below, as art of the other type is too numerous to name.
- The Dispossessed by Ursula K. Le Guin is very meaningful to me as a vision of what a society where we cared “sufficiently” about others might look like.
- Anything by Kurt Vonnegut, a very humanistic writer. God Bless You, Mr. Rosewater is explicitly about a philosophically-minded billionaire who decides to give his wealth away to the poor, and the consequences of that decision.
- George Saunders, another very humanistic writer. Tenth of December is great. https://www.newyorker.com/magazine/2012/10/15/the-semplica-girl-diaries is a great one of his about the banality of evil.
- https://www.newyorker.com/magazine/2008/08/11/trouble-poem-matthew-dickman (Content warning: suicide)
- https://en.wikipedia.org/wiki/In_Jackson_Heights (a long, quiet, slice-of-life documentary that jumps between people)
- https://en.wikipedia.org/wiki/Death_by_Hanging (the Japanese police botch an execution, causing the criminal to lose all his memories of the crime; the police, panicking, try to jog his memory so they can execute him like they’re supposed to)
You write “I don’t know how much of our time this is worth”, but to me it seems clear that this is worth a *lot* of our time.
I have a model of human motivation. One aspect of my model is that it is very hard for most people (myself very included) to remain motivated to do something that does not get them any social rewards from the people around them.
Others on this forum have written about “values drift” (https://forum.effectivealtruism.org/posts/eRo5A7scsxdArxMCt/concrete-ways-to-reduce-risks-of-value-drift) and the role community plays in it.
I like the idea of using food scares as a proxy! Very cool.
It sounds like you are saying that knowing “how will kg of chicken sold change given change in price” will let you answer “how will kg of chicken sold change given me not buying chicken.” I don’t see quite how to do this, could you give me a pointer? (for concreteness, what does the paper’s estimate of elasticity of poultry at 0.68 mean for “kg of chicken sold given I don’t buy the chicken”)
Perhaps more importantly, it sounds like you might disagree that one person abstaining from eating chicken has a meaningful impact on the number of chickens raised + killed. If so I’m quite interested, as sources like https://reducing-suffering.org/does-vegetarianism-make-a-difference/ have convinced me of the opposite.
My current model is that if I buy the meat of one chicken at a supermarket, that *in expectation* causes about one chicken to be raised + killed.
Thanks for finding this paper. But I think they are answering the question “If I change price, what happens to demand?”, while I am asking “If demand drops (me not buying any chicken), what happens to total quantity sold?”
It doesn’t seem consistent to me to say “I’m too small of an actor to affect price, but not to affect quantity sold.”
Thank you for the small education in economics of consideration 2, though. I’ve read the Wikipedia article and found it helpful, although I have further questions. Are there goods that economists think do work like what my friend is describing? Is there a name for goods like this?
Thanks, Samara. I found the paper you’re talking about here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804646/pdf/216.pdf
I’m out of my depth here, but it looks like the paper is answering the question: “if the price of chicken changes from $X/kg to $(X + Y)/kg, how will kg of chicken sold change?” While the question I’m asking is “if I don’t buy chicken, how will kg of chicken sold change?”.
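If I’ve understood the textbook supply-and-demand model, one way to connect the two questions is through both elasticities: when one consumer permanently stops buying, the demand curve shifts left, price falls slightly, other consumers buy a bit more at the lower price, and the new equilibrium quantity falls by roughly ε_s / (ε_s + |ε_d|) per unit of demand removed. A rough sketch of that arithmetic — the 0.68 demand elasticity is the paper’s figure, but the supply elasticity below is a made-up placeholder, since I don’t know the real number:

```python
# Rough sketch: expected fall in equilibrium quantity sold when one
# consumer permanently stops buying 1 kg of chicken.
#
# Textbook approximation for small shifts around equilibrium:
#   drop in quantity ~= demand_shift * e_s / (e_s + |e_d|)
# where e_s is the supply elasticity and e_d the demand elasticity.

def quantity_drop(demand_shift_kg, e_supply, e_demand_abs):
    """Expected fall in equilibrium quantity (kg) for a given demand shift."""
    return demand_shift_kg * e_supply / (e_supply + e_demand_abs)

e_demand_abs = 0.68  # poultry demand elasticity from the paper (absolute value)
e_supply = 0.5       # placeholder value: I don't know the actual supply elasticity

drop = quantity_drop(1.0, e_supply, e_demand_abs)
print(f"Not buying 1 kg reduces quantity sold by ~{drop:.2f} kg in expectation")
```

On this model the answer is always between 0 and 1 kg: my abstention lowers the price a little, others buy somewhat more in response, and production falls by the remainder. Does that match how you’d use the paper’s estimate?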
This is how it feels to me to be mentally fatigued.
I’ve done management (of software engineers in a startup) and decided to move away from it for now, but can see a future in which I do more of it.
I am quite interested but am in Boston. Do you know of similar events in my area or in the US?
I love “What sets us against one another...” and feel this is the best expression of an idea which is powerful to me. I had not found such a short expression of it before. Thank you for it.