Thoughts on “A case against strong longtermism” (Masrani)
I recently read Vaden Masrani’s post “A case against strong longtermism” for a book/journal club, and noted some reactions to the post as I went. I’m making this post to share slightly-neatened-up versions of those reactions.[1] I’ll split my specific reactions into separate comments, partly so it’s easier for people to reply to specific points.
Masrani’s post centres on critiquing The Case for Strong Longtermism, a paper by Greaves & MacAskill. I recommend reading that paper before reading this post or Masrani’s post. I think the paper is basically very good and very useful, though also flawed in a few ways; I wrote my thoughts on the paper here.
My overall thoughts on Masrani’s post are as follows:
I think that criticism is very often valuable, and especially so for ideas that are promoted by prominent people and are influencing important decisions. Masrani’s post represents a critique of such an idea, so it’s in a category of things I generally appreciate and think we should generally be happy people are producing.
However, my independent impression is that the critique was quite weak and that it involved multiple misunderstandings of the Greaves & MacAskill paper in particular, longtermist ideas and efforts more generally, and also some other philosophical ideas.
Relatedly, my independent impression is that Masrani’s post is probably more likely to cause confusions or misconceptions than it is to usefully advance people’s thinking and discussions.
All that said, I do think that there are various plausible arguments against longtermism that warrant further discussion and research.
Some are discussed in Greaves and MacAskill’s paper.
One of the best such arguments (in my view) is discussed in Tarsney’s great paper “The epistemic challenge to longtermism”.
See also Criticism of effective altruist causes and What are the leading critiques of “longtermism” and related concepts.
(Given these views, I was also pretty tempted to call this A Case Against “A Case Against Longtermism”, but I didn’t want to set off an infinitely recursive loop of increasingly long and snarky titles!)
(Masrani also engaged in the comments section of their original post, wrote some followup posts, and has discussed similar topics on a podcast they host with Ben Chugg. I read most of the comments section on the original post and listened to a 3-hour interview they had with Fin and Luca of the podcast Hear This Idea, and continued to be unimpressed by the critiques provided. But I haven’t read/listened to the other things.)
[1] This seemed better than just making all these comments on Masrani’s post, since I had a lot of comments and that post is from several months ago.
This post does not necessarily represent the views of any of my employers.
Masrani writes:
This is simply false. Greaves and MacAskill actually spend a decent amount of space discussing various alternatives to standard expected value reasoning.
And Greaves also wrote a whole (separate) paper on the related matter of cluelessness.
In any case, “utterly oblivious” seems to me to be both a rude phrasing and a strong claim.
This is true, but seems to be responding to tone rather than the substance of the argument. And given that (I think) we’re interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.
The actual issue that is relevant here, which isn’t well identified, is that naive expected value fails in a number of ways. Some of these are legitimate criticisms, albeit not well formulated in the paper. Specifically I think that there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.
That’s roughly true for me saying “In any case, “utterly oblivious” seems to me to be both a rude phrasing and a strong claim.”
But I don’t think it’s true for my comment as a whole. Masrani makes specific claims here, and the claims are inaccurate.
I think steelmanning is often really useful. But I think there’s also value in noticing when a person/post/whatever is just actually incorrect about something, and in trying to understand what arguments they’re actually making. Some reasons:
Something like epistemic spot-checking / combatting something like Gell-Mann Amnesia
Making it less likely that other people walk away remembering the incorrect claim as actually true
Prioritising which arguments/criticisms to bother engaging with
We obviously shouldn’t choose arguments at random from the entire pool of available arguments in the world, or the entire pool of available arguments on a given topic. It’s probably often more efficient to engage with arguments that are already quite strong, rather than steelmanning less strong arguments that we happen to have stumbled upon
So here I’m actually not solely interested in the substantive questions raised by Masrani’s post, but also in countering misconceptions that I think the post may have generated, and giving indications of why I think people might find it more useful to engage with other criticisms of longtermism instead (e.g., the ones linked to in the body of my post itself).
One final thing worth noting is that this was a quickly produced post adapting notes I’d made anyway. I do think that if I’d spent quite a while on this, it’d be fair to say “Why didn’t you just talk about the best arguments against longtermism, and the points missing from Greaves & MacAskill, instead?”
Yeah, I imagine there are many things in this vicinity that Greaves & MacAskill didn’t cover yet that are relevant to the case for strong longtermism or how to implement it in practice, and I’d be happy to see (a) recommendations of sources where those things are discussed well, and/or (b) other people generate new useful discussions of those things. Ideally applied to longtermism specifically, but general discussions—or general discussions plus a quick explanation of the relevance—seems useful too.
I definitely don’t mean to imply with this post that I see strong longtermism as clearly true; I’m just quickly countering a specific set of misconceptions and objections.
As I mentioned in my other reply, I don’t see as much value in responding to weak-man claims here on the forum, but agree that they can be useful more generally.
Regarding “secondary uncertainty, value of information, and similar issues,” I’d be happy to point to sources that are relevant on these topics generally, especially Morgan and Henrion’s “Uncertainty,” which is a general introduction to some of these ideas, and my RAND dissertation chair’s work on policy making under uncertainty, which focuses on US DOD decisions but is applicable more widely. Unfortunately, I haven’t put together my ideas on this, and I don’t know that anyone at GPI has done so either—but I do know that they have engaged with several people at RAND who do this type of work, so it’s on their agenda.
So you’ve shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? i.e. can you refute its central point?
tl;dr: Yes, I think so, for both questions. I think my comments already did this, but that I didn’t make it obvious whether and where this happened, so your question is a useful one.
I like that essay, and also this related Slate Star Codex essay. I also think this might be a generically useful question to ask in response to a post like the one I’ve made. (Though I also think there’s value in epistemic spot checks, and that if you know there are a large number of faulty arguments in X but not whether those were the central arguments in X, that’s still some evidence that the central arguments are faulty too.)
Your comment makes me realise that probably a better structure for this post would’ve been to first summarise my understanding of the central point Masrani was making and Masrani’s key arguments for that, and then say why I disagree with parts of those key arguments, and then maybe also add other disagreements but flag that they’re less central.
The main reason my post is structured as it is is basically just that I tried to relatively quickly adapt notes that I made while reading the post. But here’s a quick attempt at something like that (from re-skimming Masrani’s post now, having originally read it over a month ago)...
---
Masrani writes:
As noted in some of my comments:
The “undefined” bit involves talking a lot about infinities, but neither Greaves and MacAskill’s paper nor standard cases for longtermism rely on infinities
The “undefined” bit also “proves too much”; it basically says we can’t predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy
See this comment
Greaves and MacAskill say we shouldn’t have a pure rate of time preference. They don’t say we should engage in no time discounting at all. And Masrani’s arguments for a bias towards the present are unrelated to the question of whether we should have a pure rate of time preference, so they don’t actually counter the paper’s claims.
Masrani’s post also significantly misunderstands what strong longtermism and the paper actually imply (e.g., assuming that they definitely entail a focus on existential risk), which is a problem when attempting to argue against strong longtermism and the paper.
I’m not sure whether this last bit should be considered part of refuting the main point, but it seems relevant?
---
(I should note again that I read the post over a month ago and just dipped in quickly to skim for a central point to refute, so it’s possible there were other central points I missed.)
I also expect that the post mentioned various other things that are related to better arguments against longtermism, e.g. the epistemic challenge to longtermism that Tarsney’s paper discusses. But I’m pretty sure I remember the post not adding to what had already been discussed on those points. (A post that just summarised those other arguments could be useful, but the post didn’t set out to be that.)
Hey! Can’t respond to most of your points now unfortunately, but just a few quick things :)
(I’m working on a followup piece at the moment and will try to respond to some of your criticisms there)
My central point is the ‘inconsequential in the grand scheme of things’ one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness.
Just wanted to flag that I responded to the ‘proving too much’ concern here: Proving Too Much
Hey Vaden!
Yeah, I didn’t read your other posts (including Proving Too Much), so it’s possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn’t read them is that I read your first post, read most comments on it, listened to the 3-hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill’s paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be about deontology vs consequentialism—and/or maybe placing less moral weight on future generations. It doesn’t seem to be about reasons why strong longtermism would have bad consequences or reasons why longtermist arguments have been unsound (given consequentialism). Specifically, that quote’s arguments for its conclusion seem to just be that we have a stronger “duty” to the present, and that “we should never attempt to balance anybody’s misery against somebody else’s happiness.”
(Of course, I’m not reading the quote in the full context of its source. Maybe those statements were meant more like heuristics about what types of reasoning tend to have better consequences?)
But if I recall correctly, your post mostly focused on arguments that strong longtermism would have bad consequences or that longtermist arguments have been unsound. And “we should never attempt to balance anybody’s misery against somebody else’s happiness” is either:
Also an argument against any prioritisation of efforts that would help people, including e.g. GiveWell’s work, or
Basically irrelevant, if it just means we can’t “actively cause” misery in someone (as opposed to just “not helping”) in order to help others
I think that longtermism doesn’t do that any more than GiveWell does
So I think that that quote arrives at a similar conclusion to you, but it might show very different reasoning for that conclusion than your reasoning?
Do you have a sense of what the double crux(es) is/are between you and most longtermists?
Masrani seems to take (some of) Greaves and MacAskill’s examples and tentative views about what strong longtermism might indicate one should prioritise as a logically necessary consequence of the moral view itself. In particular, Masrani seems to assume that longtermism necessarily focuses solely on existential risk reduction. But this is actually incorrect.
E.g., Masrani writes: “This assumption is why longtermism states it is always better to work on x-risks than anything else one might want to do to improve the short-term.”
But in reality, what strong longtermism would say one should prioritise depends on various empirical features of the world, as well as aspects of one’s philosophical views other than strong longtermism itself (e.g., one’s views on population ethics).
I think the main two contenders for alternative longtermist priorities are (1) trajectory changes other than existential risks and (2) speeding up development/progress.
Masrani also seems to have not noticed that Greaves and MacAskill’s paper itself notes some things other than existential risk reduction which could be priorities under a strong longtermist perspective, and which could align more with the sort of things GiveWell supports.
E.g., speeding up progress.
I felt uncomfortable with and confused by the section of the post that was about jargon and euphemisms.
E.g., Masrani writes “No single individual is more of an expert in morality than another, and we all have a right to ask for these ideas to be expressed in plain english.”
I definitely think that people sometimes use jargon unnecessarily or fail to explain jargon when they should’ve.
See also 3 suggestions about jargon in EA.
But I also think jargon can be very useful.
And it seemed to me that this section of the post implied that various authors were deliberately being hard to understand in order to make it less likely that they’d be held accountable, or something like that.
(Though it’s possible that I just happened to incorrectly get that “vibe”, and that that wasn’t an implication Masrani intended.)
And I definitely think that some people are more of an expert in morality than other people, in one relevant sense of “expertise”—namely, having thought more about it, having more useful concepts, knowing who the other people to talk to about related things are, etc.
I’m not very confident that these people will tend to have better bottom-line views about morality than other people (though I tentatively think they would).
But I do think I’ll learn more about morality by talking to them than by talking to a randomly chosen member of the world population.
It’s also worth noting that there are a bunch of other, more accessible descriptions of longtermism out there, and that this is specifically a formal definition aimed at an academic audience (by virtue of being a GPI paper).
Masrani seemed to jump to a strange, uncharitable, and incorrect conclusion about the history of longtermist thought.
Masrani wrote:
I appreciate that Masrani was willing to say “oops” and acknowledged having made a mistake here
But I think that this is a strange mistake to have made
I think that if it had seemed to me that a single draft-status paper led to all of those consequences, I’d find that very surprising, and I’d therefore at least google the term “longtermism” to check whether that was indeed the case
And at that point, I’d quickly find mentions of the term that predate the paper
And many of the links given in the paragraph itself clearly show publication dates that precede the draft paper
And the draft paper itself mentions prior work that makes it clear that this paper wasn’t the first presentation of ideas in this vicinity
So this seems to me like weak evidence of (1) a failure to read the paper carefully and (2) a willingness to quickly jump to uncharitable interpretations.
And some things mentioned in other comments of mine here also seem to me like weak evidence of the same things.
(But I do worry that this comment in particular sounds kind-of personal and attacking, and I apologise if it does—that’s not my intent.)
Masrani says that “longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever”. But strong longtermism being true now doesn’t mean it always was true and always will be true, as Greaves and MacAskill themselves note.
Masrani writes:
But Greaves and MacAskill’s paper explicitly notes that longtermism depends on surprising empirical facts that would not always be true, and that strong longtermism may not have held in the past.
Also, Greaves and MacAskill’s discussion of attractor states provides one obvious way in which strong longtermism could stop being true in future.
I.e., if we reach an attractor state (e.g., extinction, or lock-in of a good future), the future from that point onwards will then be far harder to influence, which would presumably very much weaken the case for strong longtermism at that point.
The case for strong longtermism would also tend to become less compelling as the ratio between the total size of the present and near-term generations and the total size of the far-future generations grows larger (unless this is offset by an increased ability to influence or predict the future).
This ratio will grow larger as our civilization expands and as we progress towards some “unchangeable limits of the universe”.
At some point, our better ability to influence the near term will presumably outweigh the larger size of the future.
At least in some places, Masrani seems to think or imply that longtermism doesn’t aim to influence any events that occur in the next (say) 1000 years. But in reality, longtermists mostly focus on influencing the further future via influencing things that happen within the next 1000 years (e.g., whether an existential catastrophe occurs).
I.e., most longtermists still care a great deal about the nearer-term future for instrumental reasons (as well as caring somewhat for intrinsic reasons)
This seems to agree with his criticism—that we care about the near-term only as it affects the long term, and can therefore justify ignoring even negative short term consequences of our actions if it leads to future benefits. It argues even more strongly for abandoning otherwise short term beneficial interventions with small longer term impacts.
Obvious examples of how this goes wrong include many economic planning projects of the 20th century, where the short term damage to communities, cities, and livelihoods was justified by incorrect claims about long term growth.
tl;dr: I basically agree with everything except “This seems to agree with his criticism”, because I think (from memory) that Masrani was making a stronger and less valid claim. (Though I’m not totally sure; it may have just been slightly sloppy writing + the other misconception that longtermism is necessarily solely focused on existential risk reduction.)
---
I think there’s a valid claim similar to what Masrani said, and that that could reasonably be seen as a criticism of longtermism given some reasonable moral and/or empirical assumptions. Specifically, I think it’s true that:
The very core of strong longtermism is the idea that the intrinsic importance of the effects of our actions on the long-term future is far greater than the intrinsic importance of the effects of our actions on the near-term, and thus that we should focus on how our actions affect the long-term (or, in other words, the near-term effects we should aim for are whichever ones are best for the long-term)
It seems very likely to be the case that what’s best for the long-term isn’t what’s the very best for the near-term
It seems plausible that what’s best for the long-term is actually net-negative for the near-term
This means acting according to strong longtermism will likely be worse for the near-term than acting according to (EA-style) neartermism, and might be net-negative for the near-term
Various historical cases suggest that “ends justify the means” reasoning and attempts to enact grand, long-term visions often have net negative effects
(Though I’m not actually sure how often they had net negative effects vs having net positive effects, how this differs from other types of reasoning and planning, and how analogous those cases are to longtermist efforts in relevant ways)
But this might suggest that, in practice, strong longtermism is more likely to be bad for the near-term than it should be in theory
I would mostly “bite the bullet” of this critique—i.e., say that we can’t prioritise everything at once, and if the case for strong longtermism holds up then it’s appropriate that we prioritise the long-term at the expense of the short-term. And then I do think we should remain vigilant of ways our thinking, priorities, actions, etc. could mirror bad instances of “ends justify the means” etc.
But I could understand someone else being more worried about this objection.
Also, FWIW, I think the Greaves and MacAskill paper maybe fails to acknowledge that strong longtermism actions might be very strange or net-negative from a near-term perspective, rather than just not top priorities. (Though maybe I just forgot where they said this.) I made a related comment here.
---
We could steelman Masrani into making the above sorts of claims and then have a productive discussion. But I think it’s also useful to sometimes just talk about what someone actually said and correct things that are actually misleading or common misconceptions. And I think Masrani was making a stronger claim (though I’m now unsure, as mentioned at the top), which I also think some other people actually believe and which seems like a misconception worth correcting (see also). (To be fair, I think Greaves & MacAskill could maybe have been more careful with some phrasings to avoid people forming this misconception.)
E.g. Masrani writes:
And:
And:
(But again, I now realise that this might have just been slightly sloppy writing + the x-risk misconception, and also that Greaves & MacAskill may have been slightly sloppy with some phrases as well in a way that contributed to this. So I think this point isn’t especially important as a critique of the post.
Though I guess my original statement still seems appropriate hedged: “At least in some places, Masrani seems to think or imply that longtermism doesn’t aim to influence any events that occur in the next (say) 1000 years.” [emphasis added])
I think we basically agree.
And while I agree that it’s sometimes useful to respond to what was actually said, rather than the best possible claims, that type of post is useful as a public response, rather than useful for discussion of the ideas. Given that the forum is for discussion about EA and EA ideas, I’d prefer to use steelman arguments where possible to better understand the questions at hand.
(I’ll put a bundle of smaller, disconnected reactions in this one thread.)
Masrani writes:
But empirical evidence and common sense actually clearly demonstrate that we can predict the future with better than chance accuracy, at least in some domains, and sometimes very easily
E.g., I can predict that the sun will rise tomorrow
See also Phil Tetlock’s work
Masrani seems to confuse (1) pure time discounting / a pure rate of time preference with (2) time discounting for other reasons (e.g., due to the possibility that the future won’t come to pass due to a catastrophe; see Greaves).
In particular, Masrani seems to claim that Greaves and MacAskill’s paper is wrong to reject pure time discounting, but bases that claim partly on the fact that there could be a catastrophe in future (which is a separate matter from pure time discounting).
E.g., Masrani writes: “We should be biased towards the present for the simple reason that tomorrow may not arrive. The further out into the future we go, the less certain things become, and the smaller the chance is that we’ll actually make it there. Preferring good things to happen sooner rather than later follows directly from the finitude of life.”
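The distinction between these two kinds of discounting can be made concrete with a toy sketch (this is my own illustration with made-up rates, not something from either the post or the paper): discounting for catastrophe risk depends on an empirical survival probability, while a pure rate of time preference down-weights later benefits merely because they are later.

```python
# Toy illustration (my own; the rates are made up for the example).
# Two distinct reasons a future benefit might be counted for less:

def catastrophe_discount(value, years, annual_survival=0.999):
    """Down-weight for the chance the future never arrives.

    This is an empirical matter (expected-value reasoning over
    catastrophe risk), not a pure rate of time preference.
    """
    return value * annual_survival ** years

def pure_time_discount(value, years, annual_rate=0.01):
    """Down-weight a benefit merely because it occurs later.

    This is the pure time preference that Greaves & MacAskill argue
    against; it devalues the future even if it is certain to arrive.
    """
    return value * (1 - annual_rate) ** years
```

Both functions produce exponential decay, which may be why they’re easy to run together, but only the first responds to the observation that “tomorrow may not arrive”: if catastrophe risk fell to zero (`annual_survival=1.0`), the first discount would vanish entirely while the second would remain.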
---
Another, separate point about discounting:
Masrani writes:
But as far as I can tell, this is false, at least if taken literally; instead, how concerned one should be about a given moment in time depends in part on what’s happening at that time (e.g., how many moral patients there are, and what they’re experiencing).
Masrani writes:
But we very often predict things that depend on things we don’t fully understand, and with above chance accuracy.
E.g., I can often predict with decent success what someone will do, even without knowing everything they know, and even when some things that they know and I don’t are relevant to what they’ll do.
To be clear, I’d agree with lots of weaker claims in this vicinity, like that predicting the future is very hard, and that one thing that makes it harder is that we lack some knowledge which future people will have (e.g., about the nature of future technologies).
But saying we can’t ever predict the future at all is too strong.
Yes, this seems to be a problem, but it’s also a problem with naive expected value thinking that prioritizes predictions without looking at adaptive planning or value of information. And I think Greaves and MacAskill don’t really address these issues sufficiently in their paper—though I agree that they have considered them and are open to further refinement of their ideas.
But I don’t believe that it’s clear we predict things about the long term “with above chance accuracy.” If we do, it’s not obvious how to construct the baseline probability we would expect to outperform.
Critically, the requirement for this criticism to be correct is that our predictions are not good enough to point to interventions that have higher expected benefit than more-certain ones, and this seems very plausible. Constructing the case for whether or not it is true seems valuable, but mostly unexplored.
Yeah, I agree with your first two paragraphs. (I don’t think I understand the third one; feel free to restate that, if you’ve got time.)
In particular, it’s worth noting that I agree that it’s not currently clear that we can predict (decision-relevant) things about the long-term with above chance accuracy (see also the long-range forecasting tag). Above, I merely claimed that “we very often predict things that depend on things we don’t fully understand, and with above chance accuracy”—i.e., I didn’t specify long-term.
It does seem very likely to me that it’s possible to predict decision-relevant things about the long-term future at least slightly better than complete guesswork. But it seems plausible to me that our predictive power becomes weak enough that that outweighs the increased scale of the future, such that we should focus on near-term effects instead. (I have in mind basically Tarsney’s way of framing the topic from his “Epistemic Challenge” paper. There are also of course factors other than those two things that could change the balance, like population ethical views or various forms of risk-aversion.)
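That trade-off can be caricatured in a few lines. This is a toy model of my own construction with entirely made-up numbers; the `predictive_half_life` parameter and the exponential-decay form are illustrative assumptions, not Tarsney’s actual model. The point it sketches: whether the long-term option wins depends on how fast predictive power decays relative to how large the future’s scale is.

```python
# Toy model (my own construction, loosely inspired by Tarsney's framing;
# all numbers and the exponential-decay form are made up for illustration).

def expected_longterm_value(scale, horizon_years, predictive_half_life):
    """Far-future benefit, down-weighted as forecast reliability
    decays toward guesswork over the time horizon."""
    reliability = 0.5 ** (horizon_years / predictive_half_life)
    return scale * reliability

near_term_value = 1.0  # a benefit we can secure with high confidence

# With slowly decaying predictive power, the future's scale dominates...
optimistic = expected_longterm_value(1e6, horizon_years=1000, predictive_half_life=200)
# ...but with quickly decaying predictive power, it doesn't.
pessimistic = expected_longterm_value(1e6, horizon_years=1000, predictive_half_life=20)
```

On these made-up numbers, the optimistic case beats the near-term option (31,250 vs 1) and the pessimistic case loses badly (roughly 1e-9 vs 1), which is just the point above in miniature: whether strong longtermism goes through can hinge on the empirical rate at which our predictive power decays, not only on the future’s scale.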
This seems like a super interesting and important topic, both for getting more clarity on whether we should adopt strong longtermism and on how to act given longtermism.
---
I specified “decision-relevant” above because of basically the following points Tarsney makes in his Epistemic Challenge paper:
Agree that this is important, and it’s something I’ve been thinking about for a while. But the last paragraph was just trying to explain what the paper said (more clearly) were evaluative practical predictions. I just think about that in more decision-theoretic terms, and if I was writing about this more, would want to formulate it that way.
Masrani focuses quite a bit on the idea that longtermism relies on comparisons to an infinite amount of potential future good. But Greaves and MacAskill’s paper doesn’t actually mention infinity at any point, and neither their argument nor the other standard arguments I’ve seen rely at all on infinities.
E.g., Masrani writes: “By “this observation” I just mean the fact that longtermism is a really really bad idea because it lets you justify present day suffering forever, by always comparing it to an infinite amount of potential future good (forever).”
(I won’t say more on this here, since the comments section of the link-post for Masrani’s post already contains an extensive discussion of whether and how infinities might be relevant in relation to longtermism.)
Masrani writes:
But I think that this is simply false: our predictions (as well as other credences) can differ in how “resilient” they are
See e.g. Credal resilience and Use resilience, instead of imprecision, to communicate uncertainty
Masrani seems to sort-of implicitly assume that (a) people will have strong ulterior motives to bend the ideas of strong longtermism towards things that they want to believe or support anyway (for non-altruistic reasons), and thus (b) we must guard against a view or a style of reasoning which is vulnerable to being bent in that way. But I think it would be more productive and accurate to basically “assume good faith”.
I think longtermism is actually less about “lifting some constraints” or letting us “get away with” something, and more about saying what we should do in certain circumstances.
Relatedly, strong longtermism doesn’t say that short-term suffering doesn’t matter and that we can therefore do whatever we want; instead, it says the long term matters even more, and thus we are obligated to focus on helping the future.
And, empirically, it really doesn’t seem like most people who identify with longtermism are mostly bending strong longtermism towards things they wanted to believe or support anyway.
(It does seem likely that there’s some degree of ulterior motives and rationalisation, but not that that’s a dominant force.)
Indeed, many of these people have switched their priorities due to longtermism, find their new priorities less emotionally resonant, and may have faced disruptions to their social or work lives due to the switch they made.
See e.g. Why I find longtermism hard, and what keeps me motivated
This data doesn’t disprove the idea that all of this happened due to ulterior motives or rationalisation (e.g., maybe the dominant motive was to conform to the beliefs of some prestigious-seeming group), but it does seem to be some evidence against that theory.
This ties into another point: Many of the framings and phrasings in Masrani’s post seem quite “loaded”, in the sense of making something sound bad partly just through strong connotations or rhetoric rather than explicit arguments in neutral terms.
E.g., the author writes “I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.”
But I think that most longtermists aren’t trying to fiddle with the numbers in order to squash funding for things that are cost-effective; most of them are mostly trying to actually work out what’s true and use that info to improve the world.
E.g., the author writes “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it—even contribute to it if we wish—because it doesn’t matter. It’s negligible. A mere rounding error.”
I do think that “inconsequential in the grand scheme of things” is indeed, in some sense, an implication of longtermism. But that seems like quite a misleading way of framing it.
I think the spirit of the longtermist view is more along the lines of thinking that what we already thought mattered still matters a lot, but also that other things matter surprisingly and hugely much, such that there may be a strong reason to strongly prioritise those other things.
So the spirit is more like caring about additional huge things, rather than being callous about things we used to care about.
Though I do acknowledge that those different framings can reach similar conclusions in practice, and also that longtermism is sometimes framed in a way that is more callous/dismissive than I’m suggesting here.
This can happen unconsciously, though, e.g. via confirmation bias, or whenever there’s arbitrariness or “whim”, e.g. in priors or in how you weight different considerations with little evidence. The weaker the evidence, the more prone to bias, and there’s self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly. (EDIT: see the optimizer’s curse.) This is basically something Greaves and MacAskill acknowledge in their paper, although they also argue it applies to short-term-focused interventions:
That being said, I suspect it’s possible in practice to hedge against these indirect effects from short-term-focused interventions.
I haven’t read your post, so can’t comment.
That said, FWIW, my independent impression is that “cluelessness” isn’t a useful concept and that the common ways the concept has been used either to counter neartermism or counter longtermism are misguided. (I write about this here and here.) So I guess that that’s probably consistent with your conclusion, though maybe by a different road. (I prefer to use the sort of analysis in Tarsney’s epistemic challenge paper, and I think that that pushes in favour of either longtermism or further research on longtermism vs neartermism, though I definitely acknowledge room for debate on that.)
I think Tarsney’s paper does not address/avoid cluelessness, or at least its spirit, i.e., the arbitrary weighting of different considerations, since
You still need to find a specific intervention that you predict ex ante pushes you towards one attractor and away from another, and you need more reason to believe it does this than that it pushes in the opposite direction (in expectation, say). If you have more reason to believe this only due to arbitrary weights, which could reasonably have been chosen to have the intervention backfire, this is not a good epistemic state to be in. For example, is the AI safety work we’re doing now backfiring? This could happen by, for example:
creating a false sense of security,
publishing the results of the GPT models, demonstrating AI capabilities and showing the world how much further we can already push it, and therefore accelerating AI development, or
slowing AI development more in countries that care more about safety than those that don’t care much, risking a much worse AGI takeover if it matters who builds it first.
You still need to predict which of the attractors is ex ante ethically better, which again involves both arbitrary empirical weights and arbitrary ethical weights (moral uncertainty). You might find the choice to be sensitive to something arbitrary that could reasonably go either way. Is extinction actually bad, considering the possibility of s-risks?
Does some s-risk work (e.g. on AI safety or authoritarianism) reduce some extinction risks and thereby increase other s-risks, and how do we weigh those possibilities?
I worry that research on longtermism vs neartermism (like Tarsney’s paper) just ignores these problems, since you really need to deal with somewhat specific interventions, because of the different considerations involved. In my view, (strong) longtermism is only true if you actually identify an intervention that you can only reasonably believe does (much) more net good in the far future in expectation than short-term-focused alternatives do in the short term in expectation, or, roughly, that you can only reasonably believe does (much) more good than harm (in the far future) in expectation. This requires careful analysis of a specific intervention, and we may not have the right information now or ever to confirm that a particular intervention satisfies these conditions. For every longtermist intervention I’ve tried to come up with specific objections to, I’ve found objections that I think could reasonably push it into doing more harm than good in expectation.
Of course, what should “reasonable belief” mean? How do we decide which beliefs are reasonable and which ones aren’t (and the degree of reasonableness, if it’s a fuzzy concept)?
Basically, I agree that longtermist interventions could have these downside risks, but:
I think we should basically just factor that into their expected value (while using various best practices and avoiding naive approaches)
I do acknowledge that this is harder than that makes it sound, and that people often do a bad job. But...
I think that these same points also apply to neartermist interventions
Though with less uncertainty about at least the near-term effects, of course
I think this gets at part of what comes to mind when I hear objections like this.
Another part is: I think we could say all of that with regards to literally any decision—we’d often be less uncertain, and it might be less reasonable to think the decision would be net negative or astronomically so, but I think it just comes in degrees, rather than applying strongly to some scenarios and not at all applying to others. One way to put this is that I think basically every decision meets the criteria for complex cluelessness (as I argued in the above-mentioned links: here and here).
But really I think that (partly for that reason) we should just ditch the term “complex cluelessness” entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer’s curse, best practice for forecasting, and expected values given all that.
Here I acknowledge that I’m making some epistemological, empirical, decision-theoretic, and/or moral claims/assumptions that I’m aware various people who’ve thought about related topics would contest (including yourself and maybe Greaves, both of whom have clearly “done your homework”). I’m also aware that I haven’t fully justified these stances here, but it seemed useful to gesture roughly at my conclusions and reasoning anyway.
I do think that these considerations mostly push against longtermism and in favour of neartermism. (Caveats include things like being very morally uncertain, such that e.g. reducing poverty or reducing factory farming could easily be bad, such that maybe the best thing is to maintain option value and maximise the chance of a long reflection. But this also reduces option value in some ways. And then one can counter that point, and so on.) But I think we should see this all as a bunch of competing quantitative factors, rather than as absolutes and binaries.
(Also, as noted elsewhere, I currently think longtermism—or further research on whether to be longtermist—comes out ahead of neartermism, all-things-considered, but I’m unsure on that.)
I don’t think it’s usually reasonable to choose only one expected value estimate, though, and this to me is the main consequence of cluelessness. Doing your best will still leave a great deal of ambiguity if you’re being honest about which beliefs you think would be reasonable to hold, despite their not being your own fairly arbitrary best guess (often I don’t even have a best guess, precisely because of how arbitrary that seems). Sensitivity analysis seems important.
I would say complex cluelessness basically is just sensitivity of recommendations to model uncertainty. The problem is that it’s often too arbitrary to come to a single estimate by combining models. Two people with access to all of the same information and even the same ethical views (same fundamental moral uncertainty and methods for dealing with them) could still disagree about whether an intervention is good or bad, or which of two interventions is best, depending basically on whims (priors, arbitrary weightings).
With short-termist interventions backed by good evidence, at least substantial parts of our credences are not very sensitive to arbitrariness, even if the expected value on the whole is; the latter is what I hope hedging could be used to control. Maybe you can do this just with longtermist interventions, though. A portfolio of interventions can be less ambiguous than each intervention in it. (This is what my hedging post is about.)
tl;dr: I basically agree with your first paragraph, but think that:
that’s mostly consistent with my prior comment
that doesn’t represent a strong argument against longtermism
Masrani’s claims/language go beyond the defensible claims you’re making
Agreed. But:
I think that a small to moderate degree of such bias is something I acknowledged in my prior comment
(And I intended to imply that it could occur unconsciously, though I didn’t explicitly state that)
I think unconscious bias is always a possibility, including in relation to whatever alternative to longtermism one might endorse
See also Caution on Bias Arguments and Beware Isolated Demands for Rigor
That said, I think “The weaker the evidence, the more prone to bias” is true (all other factors held constant), and I think that that does create one reason why bias may push in favour of longtermism more than in favour of other things.
I think I probably should’ve acknowledged that.
But there’s still the fact that there are so many other sources of bias, factors exacerbating or mitigating bias, etc. So it’s still far from obvious which group of people (sorted by current cause priorities) is more biased overall in their cause prioritisation.
And I think that there’s some value in trying to figure that out, but that should be done and discussed very carefully, and is probably less useful than other discussions/research that could inform cause priorities.
E.g., scope neglect, identifiable victim effects, and confirmation bias when most people first enter EA (since more were previously interested in global health & dev than in longtermism) all bias against longtermism
But a desire to conform to what’s currently probably more “trendy” in EA biases people towards longtermism
And so on
Less important: It seems far from obvious to me whether there’s substantial truth in the claim that “there’s self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly”, even assuming bias is a big part of the story.
E.g., I think things along the lines of conformity and deference are more likely culprits for “unwarranted/unjustified” shifts towards longtermism than confirmation bias are
It seems like a very large portion of longtermists were originally focused on other areas and were surprised to find themselves ending up longtermist, which makes confirmation bias seem like an unlikely explanation
Compared to what you’re suggesting, Masrani—at least in some places—seems to imply something more extreme, more conscious, and/or more explicitly permitted by longtermism itself (rather than just general biases that are exacerbated by having limited info)
E.g., “by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever.” [emphasis added]
E.g., “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it—even contribute to it if we wish—because it doesn’t matter.” [emphasis added]
This very much sounds to me like “assuming bad faith” in a way that I think is both unproductive and inaccurate for most actual longtermists
I.e., this sounds quite different to “These people are really trying to do what’s best. But they’re subject to cognitive biases and are disproportionately affected by the beliefs of the people they happen to be around or look up to—as are we all. And there are X, Y, Z specific reasons to think those effects are leading these people to be more inclined towards longtermism than they should be.”