Hey Vaden!
Yeah, I didn't read your other posts (including Proving Too Much), so it's possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn't read them is that I read your first post, read most comments on it, listened to the 3-hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill's paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism, and/or maybe about placing less moral weight on future generations. It doesn't seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote's arguments for its conclusion seem to just be that we have a stronger "duty" to the present, and that "we should never attempt to balance anybody's misery against somebody else's happiness."
(Of course, I'm not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And "we should never attempt to balance anybody's misery against somebody else's happiness" is either:
- Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell's work, or
- Basically irrelevant, if it just means we can't "actively cause" misery in someone (as opposed to just "not helping") in order to help others
  - I think that longtermism doesn't do that any more than GiveWell does
So I think that quote arrives at a similar conclusion to yours, but perhaps via very different reasoning?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?