The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.
The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.
(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, e.g., Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped them, to prevent the much, much larger issue of Nazis getting nuclear weapons.)
This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just like it applies to “let’s commit atrocities in pursuit of a brighter tomorrow in 2 years”. Literally any nice thing somebody says you could get would “justify atrocities”, in exactly the same way, if you forgot this rule. If you admit the existence of thousands of American schoolchildren getting suboptimally nutritious lunches, it could, oh no, justify abducting and torturing businessmen into using their ATM cards so you could get more money for the schoolchildren. Obviously then those children must not exist, or maybe they don’t have qualia so their suffering won’t be important, because if they existed and mattered that could justify atrocities, couldn’t it?
There is nothing special about longtermism compared to any other big desideratum in this regard. It is 100% unjustified special attention because people don’t like the desideratum itself. The same way that people ask “How can we spend money on AI safety when children are starving now?” but their mind doesn’t make the same leap about “How can we spend money on fighting global warming when children are starving now?” or say “Hey maybe we should critique total spending on lipstick advertising before we critique spending on rockets.”
As always, transhumanism done correctly is just humanism.
Agreed, and that’s a very good response to a position that one of the sides I critiqued has presented. But despite this and other reasons to reject their positions, I don’t think the reverse theoretical claim that we should focus resources exclusively on longtermism is a reasonable one to hold, even while accepting the deontological taboo and dismissing those overwrought supposed fears.
There is nothing special about longtermism compared to any other big desideratum in this regard.
I’m not sure this is the case. E.g. Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because their all-important end goal justifies anything in the medium term. I haven’t rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is based on something like utopia.
One reason why you might intuitively think there would be a relationship is that shorter-term impacts are typically somewhat more bounded; e.g. if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn’t justify torturing hundreds of thousands of people. With the strong longtermist claims it’s much less clear that there’s any sort of upper bound, so to draw a firm line against atrocities you end up looking to somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn’t completely absolute but yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside).
There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!