“If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.
The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out over hundreds of years by philosophers from the opposing schools of deontological ethics, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.
The push to popularize long-termism is not an attempt to solve these long-standing intellectual debates, but to make an end run around it. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times.”
This strikes me as a very common class of confusion. I have seen many EAs say that what they hope for out of “What We Owe the Future” is that it will act as a sort of “Animal Liberation for future people”. You don’t see a ton of people saying something like “caring about animals seems nice and all, but you have to view this book in context. Secretly being pro-animal liberation is about being a utilitarian sentientist with an equal consideration of equal interests welfarist approach, that awards secondary rights like life based on personhood”. This would seem either like a blatant failure of reading comprehension, or a sort of ethical paranoia that can’t picture any reason someone would argue for an ethical position that didn’t come with their entire fundamental moral theory tacked on.
On the one hand, I think pieces like this are making a more forgivable mistake, because the basic version of the premise just doesn’t look controversial enough to be what MacAskill is actually hoping for. Indeed, I personally think the comparison isn’t fantastic, in that MacAskill probably hopes the book will do more to inspire further action and discussion than to change minds about the fundamental issue (which, again, is less controversial, and which he spends less time on in the book).
On the other hand, he has been at special pains to emphasize, in his book, interviews, and secondary writings, that he is highly uncertain about first-order moral views, and that he is arguing for longtermism only as a coalition around these broad issues and ways of making moral decisions on the margin. Someone like MacAskill, who specifically argues for a period in which we hold off on irreversible changes for as long as possible in order to get these moral discussions right, really doesn’t fit the bill of someone trying to “make an end run around” these issues.