Hey! Can’t respond to most of your points right now, unfortunately, but just a few quick things :)
(I’m working on a follow-up piece at the moment and will try to respond to some of your criticisms there.)
My central point is the ‘inconsequential in the grand scheme of things’ one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness.
> The “undefined” bit also “proves too much”; it basically says we can’t predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy.
Just wanted to flag that I responded to the ‘proving too much’ concern here: Proving Too Much
Hey Vaden!
Yeah, I didn’t read your other posts (including Proving Too Much), so it’s possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn’t read them is that I read your first post, read most comments on it, listened to the 3-hour podcast, and read a bunch of other stuff on related topics (e.g., Greaves & MacAskill’s paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism—and/or maybe placing less moral weight on future generations. It doesn’t seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote’s arguments for its conclusion seem to just be that we have a stronger “duty” to the present, and that “we should never attempt to balance anybody’s misery against somebody else’s happiness.”
(Of course, I’m not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And “we should never attempt to balance anybody’s misery against somebody else’s happiness” is either:
- Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell’s work, or
- Basically irrelevant, if it just means we can’t “actively cause” misery in someone (as opposed to just “not helping”) in order to help others
  - I think that longtermism doesn’t do that any more than GiveWell does
So I think that quote arrives at a similar conclusion to yours, but the reasoning behind it might be quite different from your own?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?