> So you’ve shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? I.e., can you refute its central point?
tl;dr: Yes, I think so, to both questions. I think my comments already did this, but I didn’t make it obvious whether and where, so your question is a useful one.
I like that essay, and also this related Slate Star Codex essay. I also think this might be a generically useful question to ask in response to a post like the one I’ve made. (Though I also think there’s value in epistemic spot checks, and that if you know there are a large number of faulty arguments in X but not whether those were the central arguments in X, that’s still some evidence that the central arguments are faulty too.)
Your comment makes me realise that a better structure for this post would probably have been to first summarise my understanding of Masrani’s central point and his key arguments for it, then say why I disagree with parts of those arguments, and then maybe add other disagreements while flagging that they’re less central.
My post is structured as it is basically because I quickly adapted notes I made while reading Masrani’s post. But here’s a quick attempt at something like that (from re-skimming the post now, having originally read it over a month ago)...
---
Masrani writes:
> In Section 2 the authors helpfully state the two assumptions which longtermism needs to get off the ground:
>
> 1. In expectation, the future is vast in size. In particular they assume the future will contain at least 1 quadrillion (10^15) beings in expectation.
> 2. We should not be biased towards the present.
>
> I think both of these assumptions are false, and in fact:
>
> 1. In expectation, the future is undefined.
> 2. We should absolutely be biased towards the present.
>
> We’ll discuss both in turn after an introduction to expected values.
As noted in some of my comments:
- The “undefined” bit involves talking a lot about infinities, but neither Greaves and MacAskill’s paper nor standard cases for longtermism rely on infinities. (For concreteness, see the worked expected-value example after this list.)
- The “undefined” bit also “proves too much”; it basically says we can’t predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy. (See this comment.)
- Greaves and MacAskill say we shouldn’t have a pure rate of time preference. They don’t say we should engage in no time discounting at all. And Masrani’s arguments for a bias towards the present are unrelated to the question of whether we should have a pure rate of time preference, so they don’t actually counter the paper’s claims. (See the discounting sketch after this list.)
- Masrani’s post also significantly misunderstands what strong longtermism and the paper actually imply (e.g., thinking that they definitely entail a focus on existential risk), which is a problem when attempting to argue against strong longtermism and the paper.

I’m not sure whether this last bit should be considered part of refuting the main point, but it seems relevant?
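To make the first bullet concrete, here is a minimal worked example of the kind of finite expected-value calculation the standard case runs on. The 10^15 figure comes from the passage quoted above; the 10^{-10} risk-reduction probability is a number I’ve made up purely for illustration, not Greaves and MacAskill’s. With two outcomes (the intervention averts extinction with probability $p = 10^{-10}$, and otherwise changes nothing):

$$\mathbb{E}[V] = p \cdot 10^{15} + (1 - p) \cdot 0 = 10^{5} \text{ lives in expectation.}$$

A finite sum over a finite set of outcomes; no infinities are needed anywhere.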
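Similarly, for the pure-rate-of-time-preference bullet, here is one textbook way of writing discounted total welfare (the notation is mine, not the paper’s). With welfare $u_t$ in year $t$, an annual catastrophe probability $x$, and a pure rate of time preference $\rho$:

$$W = \sum_{t=0}^{T} \frac{(1 - x)^t \, u_t}{(1 + \rho)^t}$$

Greaves and MacAskill’s claim is only that $\rho = 0$: later welfare doesn’t count less merely for being later. The $(1 - x)^t$ factor still discounts the future for the chance it never arrives, so rejecting a pure rate of time preference is compatible with substantial de facto discounting.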
---
(I should note again that I read the post over a month ago and just dipped in quickly to skim for a central point to refute, so it’s possible there were other central points I missed.)
I also expect the post mentioned various other things related to better arguments against longtermism, e.g. the epistemic challenge to longtermism that Tarsney’s paper discusses. But I’m pretty sure the post didn’t add to what had already been discussed on those points. (A post that just summarised those other arguments could be useful, but this post didn’t set out to be that.)
Hey! Can’t respond to most of your points now unfortunately, but just a few quick things :)
(I’m working on a follow-up piece at the moment and will try to respond to some of your criticisms there.)
My central point is the ‘inconsequential in the grand scheme of things’ one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness.
> The “undefined” bit also “proves too much”; it basically says we can’t predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
Just wanted to flag that I responded to the ‘proving too much’ concern here: Proving Too Much
Hey Vaden!
Yeah, I didn’t read your other posts (including Proving Too Much), so it’s possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn’t read them is that I read your first post, read most comments on it, listened to the 3-hour podcast, and have read a bunch of other stuff on related topics, e.g. Greaves & MacAskill’s paper, so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism—and/or maybe placing less moral weight on future generations. It doesn’t seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote’s arguments for its conclusion seem to just be that we have a stronger “duty” to the present, and that “we should never attempt to balance anybody’s misery against somebody else’s happiness.”
(Of course, I’m not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And “we should never attempt to balance anybody’s misery against somebody else’s happiness” is either:
- Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell’s work, or
- Basically irrelevant, if it just means we can’t “actively cause” misery in someone (as opposed to just “not helping”) in order to help others
  - (I think longtermism doesn’t do that any more than GiveWell does)
So I think that quote arrives at a conclusion similar to yours, but it might rest on very different reasoning for that conclusion than yours?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?