Thank you for this series — I think this is an enormously important consideration when trying to do good, and I wish it were talked about more.
I am rereading this, and find myself nodding along vigorously to this paragraph:
> I think this implies operating under an ethical precautionary principle: acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. always acting as if we are in the “no we can’t become clueful enough” category).
But not the following one:
> Does always following this precautionary principle imply analysis paralysis, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis & contemplation is itself a decision (“If you choose not to decide, you still have made a choice”).
Perhaps we indeed should move towards “analysis paralysis”, and reject actions whose long-term effects we are not highly certain about. Given the maxim that we should always act as if we are in the “no we can’t become clueful enough” category, this approach would reject actions that we anticipate to have large long-term effects (e.g. radically changing government policy, founding a company that becomes very large). But it’s not clear to me that it would reject all actions. Intuitively, P(cooking myself this fried egg will have large long-term effects) is low.
We can ask ourselves whether we are always in the position of the physician treating baby Hitler: every day when we go into work, we face many seemingly inconsequential decisions that are actually very consequential, i.e. P(cooking myself this fried egg will have large long-term effects) is actually high. But this doesn’t seem self-evident.
If we are not always in that position, it might be tractable to minimize the number of very consequential decisions that the world makes, and this might be a way out of extreme consequentialist cluelessness. For example, imagine a world made up of many populated islands, where overseas travel is impossible and so the islands are causally separated. In such a world, the possible effects of any one action end at the island where it started, so the consequences of any one action are capped in a way they are not in our world.
It seems to me that this approach would imply an EA that looks very different from the current one (and recommendations that differ from the ones you make in the next post). But it may also be a sub-consideration of the general considerations you lay out in your next post. What do you think?