Thank you for this series. I think this is an enormously important consideration when trying to do good, and I wish it were talked about more.
I am rereading this, and find myself nodding along vigorously to this paragraph:
But not the following one:
Perhaps we should indeed move towards “analysis paralysis” and reject actions whose long-term effects we cannot be highly certain of. Given the maxim that we should always act as if we are in the “no, we can’t become clueful enough” category, this approach would reject actions we anticipate to have large long-term effects (e.g. radically changing government policy, or founding a company that becomes very large). But it’s not clear to me that it would reject all actions. Intuitively, P(cooking myself this fried egg will have large long-term effects) is low.
We can ask ourselves whether we are always in the position of the physician treating baby Hitler: every day when we go into work, we face many seemingly inconsequential decisions that are actually very consequential. That is, P(cooking myself this fried egg will have large long-term effects) is actually high. But this doesn’t seem self-evident.
In other words, it might be tractable to minimize the number of very consequential decisions the world makes, and this might be a way out of extreme consequentialist cluelessness. For example, imagine a world made up of many populated islands, where overseas travel is impossible and the islands are therefore causally separated. In such a world, the possible effects of any one action end at the island where it started, so the consequences of any one action are capped in a way they are not in our world.
It seems to me that this approach would imply an EA that looks very different from the current one (and recommendations that differ from the ones you make in the next post). But it may also be a sub-consideration of the general considerations you lay out in your next post. What do you think?
Bostrom defines a “crucial consideration” as one that would overturn a conclusion or reveal the need for a major change of direction. By this definition, something may or may not be a “crucial consideration” depending on our current set of conclusions and our current direction. The definition sneaks in a connotation that important new insights will tend to reveal the need for a major change of direction. But it’s also possible that important new insights will reaffirm our current direction. See conservation of expected evidence.
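For reference, conservation of expected evidence is just the law of total probability read as a constraint on belief updating (a standard identity; the notation here is mine): for any hypothesis $H$ and any possible observation $E$,

$$P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)$$

So if observing $E$ would shift us towards “major change of direction”, then failing to observe $E$ must shift us towards “stay the course” by a compensating amount. On average, we cannot expect new insights to favor changing direction.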
Regarding the precautionary principle, consider a reversibility test. Suppose there is some parameter of the world, p, that is gradually increasing, and you have the opportunity to interfere and stop this increase at no cost. By the precautionary principle, you should not interfere. Now suppose p is currently static, and you have the opportunity to interfere and trigger a gradual increase at no cost. Again, by the precautionary principle, you should not interfere.
For someone like me, who does not believe in the act/omission distinction and believes in fighting status quo bias, this seems a little silly. I think the best arguments for a policy of non-interference in both scenarios are:
1. In the real world, actions typically have costs.
2. It’s possible that our interference isn’t reversible, and that by thinking more we can better determine whether interference is the correct course of action. In other words, the value of information is high (see the sketch after this list). But this argument depends on cluelessness being tractable! If our current guess is as good as our guess will ever be, we might as well act on it.
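To make the value-of-information argument in the second point concrete, here is a toy expected-value-of-perfect-information (EVPI) calculation. This is a minimal sketch: the two world states, two actions, and all utility numbers are hypothetical values chosen for illustration, not anything from the post.

```python
# Toy EVPI calculation: how much is it worth to resolve our
# cluelessness before choosing whether to interfere?

p_good = 0.5  # prior probability that interfering turns out well (assumed)

# Hypothetical utilities u[action][state]
u = {
    "interfere": {"good": 10, "bad": -10},
    "abstain":   {"good": 0,  "bad": 0},
}

def expected_value(action):
    """Expected utility of an action under the prior."""
    return p_good * u[action]["good"] + (1 - p_good) * u[action]["bad"]

# Acting now: choose the action with the best expected value under the prior.
value_act_now = max(expected_value(a) for a in u)

# Acting with perfect information: choose the best action in each state,
# then average over states.
value_with_info = (
    p_good * max(u[a]["good"] for a in u)
    + (1 - p_good) * max(u[a]["bad"] for a in u)
)

evpi = value_with_info - value_act_now
print(evpi)  # 5.0: thinking more is worth up to 5 utility here
```

Rerunning the same calculation with a sharper prior (say p_good = 0.99) gives an EVPI of only 0.1: as our estimates improve, further information is worth less and less, which is the low-hanging-fruit dynamic in the next paragraph.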
I’m sympathetic to the idea that the value of information is high, and I think cluelessness is tractable. I support EA groups like the Future of Humanity Institute, which are trying to work out the best course of action. But at some point the low-hanging information fruit will have been picked, and then it’s likely time to act. If we aren’t going to take action under any circumstances, gathering information is a waste of time.