Counterfactual reasoning involves considering scenarios that would occur if an agent chose a certain action, or that would have occurred if an agent had chosen an action they did not in fact choose. For instance, we can consider a counterfactual scenario in which the effective altruism community was called ‘effective giving’ rather than ‘effective altruism’.
When we rank actions, we generally want to consider not just how good an action is, but how good it is relative to the alternatives. This is implicitly assumed by the framework of idealized decision-making, but it is useful to state it explicitly.
One related heuristic is replaceability: it may be the case, for instance, that if you do not take a certain action, then someone else will take it instead, in which case your counterfactual impact is smaller than the action's direct effects suggest.
Unfortunately, counterfactuals are often difficult to evaluate. Even after an action is taken, there will in many cases remain substantial uncertainty about what would have happened if one had acted otherwise. This means that we will often be unsure about whether we have acted in the best possible way.
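The idea of evaluating an action against its counterfactual can be made concrete with a toy numerical sketch. The numbers and the function below are invented for illustration, not drawn from any source:

```python
def counterfactual_value(value_if_you_act, value_if_you_dont):
    """Counterfactual impact: what happens if you act,
    minus what would have happened otherwise."""
    return value_if_you_act - value_if_you_dont

# No replaceability: the project only happens if you do it.
print(counterfactual_value(100, 0))   # 100

# With replaceability: a slightly less effective person would
# otherwise have done it, so your counterfactual impact shrinks.
print(counterfactual_value(100, 90))  # 10
```

The same direct contribution (100) yields very different counterfactual impact depending on what would have happened in your absence, which is why replaceability matters for ranking actions.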
Further reading
Ord, Toby (2014) Drones, counterfactuals, and equilibria: Challenges in evaluating new military technologies, Future of Humanity Institute, University of Oxford.
Sempere, Nuño (2019) Shapley values: Better than counterfactuals, Effective Altruism Forum, October 10.