Sad there isn’t much engagement here; I’ll try my best.
> willfully endangers
This seems… not very backed up by evidence. As in, I haven’t seen William MacAskill, Nick Bostrom, Eliezer Yudkowsky, Nick Beckstead, Toby Ord, or… well, any longtermist “thought leaders” advocate this.
I think the paper in the linked tweet is making the point that trade-offs matter, not that we need to go out and make the largest and most uncomfortable ones (which would be abhorrent and wildly unnecessary).
Consider: through historical privilege, atrocities, etc., people in richer countries tend to be more powerful, so in the long term we’d generally expect them to have more influence on the future. It therefore makes sense to get them to use that power to make the future good’n’equitable, which may involve saving their lives.
“Endangers” here suggests callous disregard; “willfully endangers” suggests “kill or otherwise throw under the bus”. Neither is actually being suggested, as far as I can tell.
To put it another way: Abraham Lincoln was more powerful than any single person held in slavery in the Confederate South. If an abolitionist Secret Service officer could only save Lincoln or a slave, it makes more sense to save Lincoln. BUT that does not mean they want to kill the slave.
The closest analogy in longtermism would be something like “they’re intentionally ignoring the poor brown people who will be killed by runaway climate change, in favor of sci-fi nerdy doom scenarios”. But this ignores a few key factors:
If the sci-fi doom scenario is a real threat, the trade-off makes sense. That does not make it easy or comfortable, and it really shouldn’t.
If a longtermist focuses their career on a “sci-fi” cause, that does not mean they therefore think all of society’s resources/focus should go into that cause. Society, as a whole, has lots of resources.
Similarly, real existing longtermist billionaire donors don’t even focus all their (or “their”) wealth on one cause area.
Longtermists are still concerned about climate change, [even in the scenarios where](https://80000hours.org/problem-profiles/climate-change/#indirect-risks) it doesn’t cause human extinction by itself! And even if there were no danger of climate change causing or aiding extinction… it’s still bad! Longtermists still agree that it’s bad! Longtermists care about the future, even if that future is (relatively) “near-term”.
> Without using… quantitative reasoning.
Up to now I’ve used terms like “more” and “trade-off”, which I hope has been qualitative enough. But also… quantitative reasoning is good, actually. As in, if you don’t use it, you are liable to make decisions that save fewer lives and allow more suffering to happen.
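(To make that concrete, here’s a minimal sketch of the kind of comparison I mean, with completely made-up numbers and invented intervention names. It’s illustrative only, not anyone’s actual cost-effectiveness model.)

```python
# Toy comparison of two hypothetical interventions by cost per life saved.
# All numbers are invented for illustration.
interventions = {
    "intervention_A": {"cost_per_life_saved": 5_000},
    "intervention_B": {"cost_per_life_saved": 200_000},
}

budget = 1_000_000  # hypothetical budget, in dollars

for name, data in interventions.items():
    lives_saved = budget / data["cost_per_life_saved"]
    print(f"{name}: roughly {lives_saved:.0f} lives saved for ${budget:,}")

# Without doing this arithmetic, it's easy to fund the option that saves
# ~5 lives instead of the one that saves ~200 with the same budget.
```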
Again, just because a trade-off exists (or might exist) does not mean it’s the “best deal” to take. Like, sure, somebody could construct a thought experiment where they genocide poor brown people, and then ??? happens, and then the long-term future is saved. But real longtermist organizations don’t appear to be doing that. They mostly do (theoretical! cheap!) research, publish ideas, and try to persuade people of those ideas (the horror!). This is a really good deal! Again, society has enough resources for this to happen without neglecting poor brown people.
This final part is more of a nitpick. From the linked Twitter thread:

> Longtermism is closely linked to “total utilitarianism,” which scholars have noted seems to imply that wealth should actively be transferred from the poor to the rich.
Maybe Cowen (noted in the linked thread) is saying this. If so, I agree with Torres that that seems pretty dumb, so I won’t defend it. The problem is the phrase “seems to imply”. Total utilitarianism counts all the pleasure/pain experienced by everyone, which means taking into account the diminishing happiness-returns of giving one person more and more resources. If anything, those diminishing returns imply the opposite: shifting resources from rich to poor usually raises total happiness.
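(A toy illustration of that point, assuming a standard logarithmic utility function as a stand-in for diminishing returns; the numbers and the log form are my assumptions, not anything from the linked thread.)

```python
import math

# Toy model: two people share 100 units of resources, and each person's
# happiness is log(resources) -- a common stand-in for diminishing returns.
def total_utility(allocation):
    return sum(math.log(x) for x in allocation)

concentrated = [99, 1]   # pile the resources onto the richer person
equal_split = [50, 50]   # spread them out

print(total_utility(concentrated))  # log(99) + log(1)  ~ 4.60
print(total_utility(equal_split))   # log(50) + log(50) ~ 7.82

# With diminishing returns, total utilitarianism favors moving resources
# *toward* the poorer person, not away from them.
```

(Swap in a linear utility function for one of the people and you’ve built the “utility monster” from the next paragraph; the conclusion only flips if that being really does get more out of each marginal unit than everyone else would.)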
Sure, there’s the “utility monster” objection, where some being keeps getting linearly (not diminishingly) happier as we shovel resources into it. But that being seems absent from real life. And it’s still better to have the utility monster and everyone else happy, unless you think each additional resource makes the monster happier than it would make everyone else, which is even less realistic than the monster existing in the first place. (Also, you can make a “utility monster” for average utilitarianism too, since it pushes up the average. You can do it with lots of theories, potentially...) Out of the classic “bullets to bite” for utilitarianism, this is one of the easier ones to swallow.
Cowen might’ve meant something like “billionaires are occasionally longtermist, therefore the government should tax them less / give them more money, so they can donate more”. This is dumb and I won’t defend it. Note, however, that if billionaires did not exist, longtermists would likely be trying to persuade somebody to donate resources to longtermist causes, even if “somebody” means “a majority of people” or “a central government” or something. As long as any resources exist, longtermism would recommend they be used for long-term good.
This discussion has also spurred me to write a longer post about common criticisms of longtermism, which I may or may not finish and release. (Disclosure: I am a fairly hard longtermist, complete with AI safety concerns.)
EDIT 24Nov2022: I’ve been reading this article about Torres. While I think my above points stand even under the least-charitable readings of longtermist arguments, I’ve shifted my credence more towards “the people quoted don’t even share those least-charitably-described ideas”. Also, I’d be interested to read your reply to my writing above.