tl;dr: Yes, I think so, for both questions. I think my comments already did this, but that I didn't make it obvious whether and where this happened, so your question is a useful one.
I like that essay, and also this related Slate Star Codex essay. I also think this might be a generically useful question to ask in response to a post like the one I've made. (Though I also think there's value in epistemic spot checks, and that if you know there are a large number of faulty arguments in X but not whether those were the central arguments in X, that's still some evidence that the central arguments are faulty too.)
Your comment makes me realise that probably a better structure for this post would've been to first summarise my understanding of the central point Masrani was making and Masrani's key arguments for that, and then say why I disagree with parts of those key arguments, and then maybe also add other disagreements but flag that they're less central.
The main reason my post is structured as it is is basically just that I tried to relatively quickly adapt notes that I made while reading the post. But here's a quick attempt at something like that (from re-skimming Masrani's post now, having originally read it over a month ago)...
---
Masrani writes:
> In Section 2 the authors helpfully state the two assumptions which longtermism needs to get off the ground:
>
> 1. In expectation, the future is vast in size. In particular they assume the future will contain at least 1 quadrillion (10^15) beings in expectation.
> 2. We should not be biased towards the present.
>
> I think both of these assumptions are false, and in fact:
>
> 1. In expectation, the future is undefined.
> 2. We should absolutely be biased towards the present.
>
> We'll discuss both in turn after an introduction to expected values.
As noted in some of my comments:
- The "undefined" bit involves talking a lot about infinities, but neither Greaves and MacAskill's paper nor standard cases for longtermism rely on infinities (see the illustration after this list)
- The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
  - See this comment
- Greaves and MacAskill say we shouldn't have a pure rate of time preference. They don't say we should engage in no time discounting at all. And Masrani's arguments for a bias towards the present are unrelated to the question of whether we should have a pure rate of time preference, so they don't actually counter the paper's claims. (Also sketched after this list.)
- The post also significantly misunderstands what strong longtermism and the paper actually imply (e.g., thinking that they definitely entail a focus on existential risk), which is a problem when attempting to argue against strong longtermism and the paper.
I'm not sure whether this last bit should be considered part of refuting the main point, but it seems relevant?
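(A side note to make the first and third bullets above concrete, since the terms are doing a lot of work. The kind of "undefined expectation" the infinities debate turns on is the St. Petersburg-style case, where a payoff of 2^n occurs with probability 2^(-n). This is my own illustration, not a calculation from Masrani's post or from Greaves and MacAskill's paper:

$$\mathbb{E}[X] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty$$

The paper's expected-value estimates instead multiply large but finite quantities, e.g. a 10^15-being future by a small but finite probability, so no divergent sum of this kind arises. And on discounting: in the textbook Ramsey decomposition $\rho = \delta + \eta g$, the pure rate of time preference $\delta$ is separate from growth-based discounting $\eta g$. As I read the paper, it argues only that $\delta$ should be zero, which leaves the overall discount rate $\rho$ free to be positive.)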
---
(I should note again that I read the post over a month ago and just dipped in quickly to skim for a central point to refute, so it's possible there were other central points I missed.)
I also expect that the post mentioned various other things that are related to better arguments against longtermism, e.g. the epistemic challenge to longtermism that Tarsney's paper discusses. But I'm pretty sure I remember the post not adding to what had already been discussed on those points. (A post that just summarised those other arguments could be useful, but the post didn't set out to be that.)
Hey! Can't respond to most of your points now unfortunately, but just a few quick things :)
(I'm working on a follow-up piece at the moment and will try to respond to some of your criticisms there)
My central point is the "inconsequential in the grand scheme of things" one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody's misery against somebody else's happiness.
> The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
Just wanted to flag that I responded to the "proving too much" concern here: Proving Too Much
Hey Vaden!
Yeah, I didn't read your other posts (including Proving Too Much), so it's possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn't read them is that I read your first post, read most comments on it, listened to the 3-hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill's paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism, and/or maybe about placing less moral weight on future generations. It doesn't seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote's arguments for its conclusion seem to just be that we have a stronger "duty" to the present, and that "we should never attempt to balance anybody's misery against somebody else's happiness."
(Of course, I'm not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And "we should never attempt to balance anybody's misery against somebody else's happiness" is either:
- Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell's work, or
- Basically irrelevant, if it just means we can't "actively cause" misery in someone (as opposed to just "not helping") in order to help others
  - I think that longtermism doesn't do that any more than GiveWell does
So I think that quote arrives at a similar conclusion to yours, but possibly via very different reasoning?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?