Masrani seems to sort-of implicitly assume that (a) people will have strong ulterior motives to bend the ideas of strong longtermism towards things that they want to believe or support anyway (for non-altruistic reasons), and thus (b) we must guard against a view or a style of reasoning which is vulnerable to being bent in that way. But I think it would be more productive and accurate to basically “assume good faith”.
I think longtermism is actually less about “lifting some constraints” or letting us “get away with” something, and more about saying what we should do in certain circumstances.
Relatedly, strong longtermism doesn’t say that short-term suffering doesn’t matter and that we can therefore do whatever we want; instead, it says that the long term matters even more, and thus that we are obligated to focus on helping the future.
And, empirically, it really doesn’t seem like most people who identify with longtermism are mostly bending strong longtermism towards things they wanted to believe or support anyway.
(It does seem likely that there’s some degree of ulterior motives and rationalisation, but not that that’s a dominant force.)
Indeed, many of these people have switched their priorities due to longtermism, find their new priorities less emotionally resonant, and may have faced disruptions to their social or work lives due to the switch they made.
See e.g. Why I find longtermism hard, and what keeps me motivated
This data doesn’t disprove the idea that all of this happened due to ulterior motives or rationalisation (e.g., maybe the dominant motive was to conform to the beliefs of some prestigious-seeming group), but it does seem to be some evidence against that theory.
This ties into another point: Many of the framings and phrasings in Masrani’s post seem quite “loaded”, in the sense of making something sound bad partly just through strong connotations or rhetoric rather than explicit arguments in neutral terms.
E.g., the author writes “I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.”
But I think that most longtermists aren’t trying to fiddle with the numbers in order to squash funding for things that are cost-effective; most of them are mostly trying to actually work out what’s true and use that info to improve the world.
E.g., the author writes “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it—even contribute to it if we wish—because it doesn’t matter. It’s negligible. A mere rounding error.”
I do think that “inconsequential in the grand scheme of things” is indeed in some sense essentially an implication of longtermism. But that seems like quite a misleading way of framing it.
I think the spirit of the longtermist view is more along the lines of thinking that what we already thought mattered still matters a lot, but also that other things matter a surprisingly huge amount, such that there may be a strong reason to strongly prioritise those other things.
So the spirit is more like caring about additional huge things, rather than being callous about things we used to care about.
Though I do acknowledge that those different framings can reach similar conclusions in practice, and also that longtermism is sometimes framed in a way that is more callous/dismissive than I’m suggesting here.
Masrani seems to sort-of implicitly assume that (a) people will have strong ulterior motives to bend the ideas of strong longtermism towards things that they want to believe or support anyway (for non-altruistic reasons), and thus (b) we must guard against a view or a style of reasoning which is vulnerable to being bent in that way. But I think it would be more productive and accurate to basically “assume good faith”.
This can happen unconsciously, though, e.g. confirmation bias, or whenever there’s arbitrariness or “whim”, e.g. priors or how you weight different considerations with little evidence. The weaker the evidence, the more prone to bias, and there’s self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly. (EDIT: see the optimizer’s curse.) This is basically something Greaves and MacAskill acknowledge in their paper, although also argue applies to short-term-focused interventions:
Finally, it might seem at first sight that ambiguity aversion would undermine the case for strong longtermism. In contemplating options like those discussed in section 3, the first-order task is to assess what are the rational credences that some given intervention to (say) reduce extinction risk, or reduce the chance of major global conflict, or increase the safety of artificial intelligence, and so on, would lead to a large positive payoff in the long run. The thing that is most striking about this task is that it is hard. There is very little data to guide credences; one has an uncomfortable feeling of picking numbers, for the purposes of guiding important decisions, somewhat arbitrarily. That is, such interventions generate significant ambiguity. However, on reflection, attempts to optimise the short run also generate significant ambiguity, since it is very unclear what might be the long run consequences of (say) bed net distribution (Greaves, 2016). In addition, we again face the issue of whether one should be ambiguity averse with respect to the state of the world, or instead with respect to the difference one makes oneself to that state. We explore these issues in a related paper (Greaves, MacAskill and Mogensen, manuscript).
That being said, I suspect it’s possible in practice to hedge against these indirect effects from short-term-focused interventions.
I haven’t read your post, so can’t comment.
That said, FWIW, my independent impression is that “cluelessness” isn’t a useful concept and that the common ways the concept has been used either to counter neartermism or counter longtermism are misguided. (I write about this here and here.) So I guess that that’s probably consistent with your conclusion, though maybe by a different road. (I prefer to use the sort of analysis in Tarsney’s epistemic challenge paper, and I think that that pushes in favour of either longtermism or further research on longtermism vs neartermism, though I definitely acknowledge room for debate on that.)
I think Tarsney’s paper does not address/avoid cluelessness, or at least its spirit, i.e., the arbitrary weighting of different considerations, since:
You still need to find a specific intervention that you predict ex ante pushes you towards one attractor and away from another, and you need more reason to believe it does this than that it pushes in the opposite direction (in expectation, say). If you only have more reason to believe this due to arbitrary weights, which could reasonably have been chosen so that the intervention backfires instead, this is not a good epistemic state to be in. For example, is the AI safety work we’re doing now backfiring? This could be due to, for example:
creating a false sense of security,
publishing the results of the GPT models, demonstrating AI capabilities and showing the world how much further we can already push it, and therefore accelerating AI development, or
slowing AI development more in countries that care more about safety than those that don’t care much, risking a much worse AGI takeover if it matters who builds it first.
You still need to predict which of the attractors is ex ante ethically better, which again involves both arbitrary empirical weights and arbitrary ethical weights (moral uncertainty). You might find the choice to be sensitive to something arbitrary that could reasonably go either way. Is extinction actually bad, considering the possibility of s-risks?
Does some s-risk work (e.g. on AI safety or authoritarianism) reduce some extinction risks and so increase other s-risks, and how do we weigh those possibilities?
I worry that research on longtermism vs neartermism (like Tarsney’s paper) just ignores these problems, since you really need to deal with somewhat specific interventions, because of the different considerations involved. In my view, (strong) longtermism is only true if you actually identify an intervention that you can only reasonably believe does (much) more net good in the far future in expectation than short-term-focused alternatives do in the short term in expectation, or, roughly, that you can only reasonably believe does (much) more good than harm (in the far future) in expectation. This requires careful analysis of a specific intervention, and we may not have the right information now, or ever, to confirm that a particular intervention satisfies these conditions. For every longtermist intervention I’ve examined, I’ve been able to come up with specific objections that I think could reasonably push it into doing more harm than good in expectation.
Of course, what should “reasonable belief” mean? How do we decide which beliefs are reasonable and which ones aren’t (and the degree of reasonableness, if it’s a fuzzy concept)?
Basically, I agree that longtermist interventions could have these downside risks, but:
I think we should basically just factor that into their expected value (while using various best practices and avoiding naive approaches; see the toy sketch after this list)
I do acknowledge that this is harder than that makes it sound, and that people often do a bad job. But...
I think that these same points also apply to neartermist interventions
Though with less uncertainty about at least the near-term effects, of course
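To make the “factor it into the expected value” point a bit more concrete, here’s a minimal toy sketch in Python. All the numbers are invented purely for illustration; the point is just that a backfire scenario becomes one more term in the sum, rather than a standalone veto:

```python
# Toy expected-value calculation that folds a backfire scenario into the
# estimate, rather than treating "it could backfire" as a binary objection.
# All probabilities and values are made up purely for illustration.

scenarios = [
    # (probability, value if this scenario obtains)
    (0.60,  100.0),  # the intervention works roughly as hoped
    (0.30,    0.0),  # it has no lasting effect
    (0.10, -300.0),  # it backfires, e.g. via a false sense of security
]

# Sanity check: the scenarios should be exhaustive and mutually exclusive.
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

expected_value = sum(p * v for p, v in scenarios)
print(expected_value)  # 60 + 0 - 30 = 30: positive despite the downside risk
```

Of course, the hard part in practice is that the probabilities and values themselves rest on contestable judgment calls, which is exactly what’s debated below.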
Of course, what should “reasonable belief” mean? How do we decide which beliefs are reasonable and which ones aren’t (and the degree of reasonableness, if it’s a fuzzy concept)?
I think this gets at part of what comes to mind when I hear objections like this.
Another part is: I think we could say all of that with regards to literally any decision—we’d often be less uncertain, and it might be less reasonable to think the decision would be net negative or astronomically so, but I think it just comes in degrees, rather than applying strongly to some scenarios and not at all applying to others. One way to put this is that I think basically every decision meets the criteria for complex cluelessness (as I argued in the above-mentioned links: here and here).
But really I think that (partly for that reason) we should just ditch the term “complex cluelessness” entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer’s curse, best practice for forecasting, and expected values given all that.
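As a rough illustration of the “model combination and adjustment” and “optimizer’s curse” items, here’s a toy Python sketch (with arbitrary numbers of my own invention) of shrinking noisy expected-value estimates towards a skeptical prior before picking a winner:

```python
# Toy illustration of one response to the optimizer's curse: shrink noisy
# expected-value estimates towards a skeptical prior before choosing the
# best-looking option. All numbers are invented purely for illustration.

prior_mean = 0.0  # skeptical prior: the typical intervention is roughly neutral

# naive EV estimate, plus a rough "how arbitrary are the inputs?" score in [0, 1]
interventions = {
    "well_evidenced": (50.0, 0.1),
    "speculative":    (80.0, 0.9),
}

def adjusted_ev(naive_ev, arbitrariness):
    # The more arbitrary the inputs, the more the estimate gets pulled
    # towards the skeptical prior.
    return (1 - arbitrariness) * naive_ev + arbitrariness * prior_mean

for name, (naive_ev, arbitrariness) in interventions.items():
    print(name, adjusted_ev(naive_ev, arbitrariness))
# Naively, "speculative" looks better (80 vs 50); after shrinkage the
# ranking flips (8.0 vs 45.0).
```

The design choice here is just that shrinkage increases with how arbitrary the inputs are, which is one crude way of operationalising “skeptical priors” and “credal resilience”.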
Here I acknowledge that I’m making some epistemological, empirical, decision-theoretic, and/or moral claims/assumptions that I’m aware various people who’ve thought about related topics would contest (including yourself and maybe Greaves, both of whom have clearly “done your homework”). I’m also aware that I haven’t fully justified these stances here, but it seemed useful to gesture roughly at my conclusions and reasoning anyway.
I do think that these considerations mostly push against longtermism and in favour of neartermism. (Caveats include things like being very morally uncertain, such that e.g. reducing poverty or reducing factory farming could easily be bad, such that maybe the best thing is to maintain option value and maximise the chance of a long reflection. But this also reduces option value in some ways. And then one can counter that point, and so on.) But I think we should see this all as a bunch of competing quantitative factors, rather than as absolutes and binaries.
(Also, as noted elsewhere, I currently think longtermism—or further research on whether to be longtermist—comes out ahead of neartermism, all-things-considered, but I’m unsure on that.)
I don’t think it’s usually reasonable to choose only one expected value estimate, though, and this to me is the main consequence of cluelessness. Doing your best will still leave a great deal of ambiguity if you’re being honest about the range of beliefs you think it would be reasonable to hold, beyond just your own fairly arbitrary best guess (often I don’t even have a best guess, precisely because of how arbitrary that seems). Sensitivity analysis seems important.
But really I think that (partly for that reason) we should just ditch the term “complex cluelessness” entirely, and think in terms of things like credal resilience, downside risk, skeptical priors, model uncertainty, model combination and adjustment, the optimizer’s curse, best practice for forecasting, and expected values given all that.
I would say complex cluelessness basically is just sensitivity of recommendations to model uncertainty. The problem is that it’s often too arbitrary to come to a single estimate by combining models. Two people with access to all of the same information and even the same ethical views (the same fundamental moral uncertainty and the same methods for dealing with it) could still disagree about whether an intervention is good or bad, or about which of two interventions is best, depending basically on whims (priors, arbitrary weightings).
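A toy sketch of that sign-sensitivity, with numbers invented purely for illustration: two priors that the (weak) evidence can’t adjudicate between yield opposite recommendations:

```python
# Toy sketch: same evidence, two defensible priors, opposite conclusions.
# p = credence that the intervention's dominant long-run effect is positive.
# All numbers are invented purely to show sign-sensitivity to the prior.

upside, downside = 100.0, -120.0  # value if the long-run effect is good / bad

def expected_value(p):
    return p * upside + (1 - p) * downside

for label, p in [("optimistic prior", 0.60), ("pessimistic prior", 0.50)]:
    print(label, expected_value(p))
# optimistic prior  ->  +12.0
# pessimistic prior ->  -10.0
# Nothing in the evidence forces one prior over the other, yet the sign
# of the recommendation flips: that's the "whim"-sensitivity above.
```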
With short-termist interventions backed by good evidence, at least substantial parts of our credences are not very sensitive to arbitrariness, even if the expected value as a whole is; the latter is what I hope hedging could be used to control. Maybe you can do this just with longtermist interventions, though. A portfolio of interventions can be less ambiguous than each intervention in it. (This is what my hedging post is about.)
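And here’s a minimal sketch of the portfolio point, again with invented payoffs: two interventions that are each ambiguous on their own (their sign flips across two rival models), but whose 50/50 combination is robustly positive under both models:

```python
# Toy sketch of hedging: each intervention alone is ambiguous (its sign
# depends on which of two rival models is right), but a 50/50 portfolio
# is positive under both. All payoffs are invented for illustration.

#                  value under model 1, value under model 2
payoffs = {
    "intervention_a": (100.0, -40.0),  # great if model 1 is right, bad otherwise
    "intervention_b": (-40.0, 100.0),  # the mirror image
}

portfolio = {"intervention_a": 0.5, "intervention_b": 0.5}

for model in (0, 1):
    total = sum(w * payoffs[name][model] for name, w in portfolio.items())
    print(f"model {model + 1}: portfolio value = {total}")
# 30.0 under both models: the portfolio's worst case beats either
# intervention's worst case (-40.0), at the cost of a lower best case.
```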
tl;dr: I basically agree with your first paragraph, but think that:
that’s mostly consistent with my prior comment
that doesn’t represent a strong argument against longtermism
Masrani’s claims/language go beyond the defensible claims you’re making
This can happen unconsciously, though, e.g. confirmation bias, or whenever there’s arbitrariness or “whim”, e.g. priors or how you weight different considerations with little evidence. The weaker the evidence, the more prone to bias
Agreed. But:
I think that a small to moderate degree of such bias is something I acknowledged in my prior comment
(And I intended to imply that it could occur unconsciously, though I didn’t explicitly state that)
I think unconscious bias is always a possibility, including in relation to whatever alternative to longtermism one might endorse
See also Caution on Bias Arguments and Beware Isolated Demands for Rigor
That said, I think “The weaker the evidence, the more prone to bias” is true (all other factors held constant), and I think that that does create one reason why bias may push in favour of longtermism more than in favour of other things.
I think I probably should’ve acknowledged that.
But there’s still the fact that there are so many other sources of bias, factors exacerbating or mitigating bias, etc. So it’s still far from obvious which group of people (sorted by current cause priorities) is more biased overall in their cause prioritisation.
And I think that there’s some value in trying to figure that out, but that should be done and discussed very carefully, and is probably less useful than other discussions/research that could inform cause priorities.
E.g., scope neglect, identifiable victim effects, and confirmation bias when most people first enter EA (since more people were previously interested in global health & development than in longtermism) all bias people against longtermism
But a desire to conform to what’s currently probably more “trendy” in EA biases people towards longtermism
And so on
Less important: It seems far from obvious to me whether there’s substantial truth in the claim that “there’s self-selection so that the people most interested in longtermism are the ones whose arbitrary priors and weights support it most, rightly or wrongly”, even assuming bias is a big part of the story.
E.g., I think things along the lines of conformity and deference are more likely culprits for “unwarranted/unjustified” shifts towards longtermism than confirmation bias are
It seems like a very large portion of longtermists were originally focused on other areas and were surprised to find themselves ending up longtermist, which makes confirmation bias seem like an unlikely explanation
Compared to what you’re suggesting, Masrani—at least in some places—seems to imply something more extreme, more conscious, and/or more explicitly permitted by longtermism itself (rather than just general biases that are exacerbated by having limited info)
E.g., “by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever.” [emphasis added]
E.g., “To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats. In other words, the entirely subjective and non-falsifiable belief that one’s actions aren’t directly contributing to existential risks gives one carte blanche permission to treat others however one pleases. The suffering of our fellow humans alive today is inconsequential in the grand scheme of things. We can “simply ignore” it—even contribute to it if we wish—because it doesn’t matter.” [emphasis added]
This very much sounds to me like “assuming bad faith” in a way that I think is both unproductive and inaccurate for most actual longtermists
I.e., this sounds quite different to “These people are really trying to do what’s best. But they’re subject to cognitive biases and are disproportionately affected by the beliefs of the people they happen to be around or look up to—as are we all. And there are X, Y, Z specific reasons to think those effects are leading these people to be more inclined towards longtermism than they should be.”