From my perspective as an evolutionary psychologist, I wouldn’t expect us to have reliable or coherent intuitions about utility aggregation for any groups larger than about 150 people, for any time-spans beyond two generations, or for any non-human sentient beings.
This is why consequentialist thought experiments like this so often strike me as demanding the impossible of human moral intuitions—like expecting us to be able to reconcile our ‘intuitive physics’ concept of ‘impetus’ with current models of quantum gravity.
Whenever we take our moral intuitions beyond their ‘environment of evolutionary adaptedness’ (EEA), there’s no reason to expect they can be reconciled with serious consequentialist analysis. And even within the EEA, there’s no reason to expect our moral intuitions will be utilitarian rather than selfish + nepotistic + in-groupish + a bit of virtue-signaling.