I don’t see how Thorstad’s claim that the Space Guard Survey is a “special case” in which a strong longtermist priority is reasonable (and that other longtermist proposals lack the same justification) is “rebutted” by the fact that Greaves and MacAskill use the Space Guard Survey as their example. The actual scope of longtermism is clearly not restricted to monitoring exogenous risks with predictable regularity and identifiable, sustainable solutions, and so it remains subject, at least to some extent, to the critiques Thorstad identified.
Even the case for the Space Guard Survey looks a lot weaker than Thorstad granted once one factors in that near-term x-risk from AI is fairly significant, which most longtermists seem to agree with. Suddenly, instead of having favourable odds of enabling a vast future, it simply observes asteroids[1] for three decades before AI becomes so powerful that the human ability to observe asteroids is irrelevant, and any positive value it supplies is plausibly swamped by alternatives like researching AI that doesn’t need big telescopes to predict asteroid trajectories and can prevent unfriendly AI and other x-risks. The problem, of course, is that we don’t know what that best-case solution looks like,[2] and most longtermists think many areas of AI spending look harmful rather than near best case, but don’t have high certainty (or any consensus) about which areas those are. That is Thorstad’s ‘washing out’ argument.
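To put the intuition in rough numbers (every figure below is an illustrative assumption of mine, not an estimate from Thorstad, Greaves or MacAskill), here is a toy expected-value sketch of how even a modest belief in near-term transformative AI can swamp the value of asteroid monitoring:

```python
# Toy expected-value sketch. All inputs are made-up illustrative assumptions,
# not estimates from Thorstad or Greaves and MacAskill.

p_impact_per_decade = 1e-7   # assumed chance of an extinction-level asteroid impact per decade
window_decades = 3           # assumed decades before AI makes telescope-based monitoring moot
future_value = 1e15          # notional value of the long-term future (arbitrary units)

# Expected value of monitoring: the chance it averts an impact during the
# window in which human observation still matters, times the future preserved.
p_averted = 1 - (1 - p_impact_per_decade) ** window_decades
ev_monitoring = p_averted * future_value

# Expected value of an alternative spend that shaves a sliver off AI x-risk.
ai_risk_reduction = 1e-4     # assumed absolute reduction in AI extinction risk
ev_ai_work = ai_risk_reduction * future_value

print(f"asteroid monitoring  : {ev_monitoring:.3g}")   # ~3e+08 with these inputs
print(f"marginal AI risk work: {ev_ai_work:.3g}")      # ~1e+11 with these inputs
```

The ranking is driven entirely by inputs nobody can verify, which is rather the point.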
As far as I can see, Thorstad’s core argument is that even if it’s [trivially] true that the theoretical best possible course of action has most of its consequences in the future, we don’t know what that course of action is, or even what near-best solutions are. Given that most longtermists don’t think the canonical asteroid example is the best possible course of action, and that there’s widespread disagreement over whether actions like accelerating “safe” AI research increase or reduce risk, I don’t see his concession that the Space Guard Survey might have merit under some assumptions as undermining that.
- ^
ex post, we know that so far it’s observed asteroids that haven’t hit us and won’t in the foreseeable future.
- ^
in theory it could even involve saving a child from malaria who grows up to be an AI researcher. This is improbable, but when you’re dealing with unpredictable phenomena with astronomical payoffs...
By “scope of longtermism” I took Thorstad’s reference to a “class of decision situations” to mean the permutations to be evaluated (maximising welfare, maximising human proliferation, minimising suffering, etc.) rather than categories of basic actions (spending, voting, selecting clothing).[1] I’m not actually sure it makes a difference to my interpretation of the thrust of his argument (diminution, washing out and unawareness mean that solutions whose far-future impact swamps short-term benefits are vanishingly rare and generally unknowable) either way.
Sure, Thorstad absolutely starts off by conceding that, under certain assumptions about the long-term future,[2] a low-probability but robustly positive action like preparing to stop asteroids from hitting earth, which indirectly enables benefits to accrue over the very long term, can be a valid priority.[3] But it doesn’t follow that one should prioritise the long-term future in every decision-making situation in which money is given away. The funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met,[4] and his core argument is that we’re otherwise almost always clueless about what the [near] best solution for the long-term future is. It’s not a particularly good heuristic to focus spending on the outcomes you are most likely to be clueless about, and a standard approach to the accumulation of uncertainty is to discount for it, which of course privileges the short term.
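As a minimal sketch of that last point (the 2% annual “epistemic” discount rate is an arbitrary assumption of mine, not anything Thorstad commits to), compounding even a small discount for uncertainty makes far-future value contribute almost nothing:

```python
# Minimal sketch of discounting for compounding uncertainty.
# The 2% annual rate is an arbitrary illustrative assumption.

discount_rate = 0.02

def weight(years: int) -> float:
    """Weight given to a benefit arriving `years` from now."""
    return (1 - discount_rate) ** years

for horizon in (10, 100, 500, 1000):
    print(f"benefit in {horizon:>4} years gets weight {weight(horizon):.2e}")
# ~8.2e-01, ~1.3e-01, ~4.1e-05 and ~1.7e-09 respectively: the far future
# effectively drops out, so the short term dominates the calculation.
```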
I mean, I agree that Thorstad makes no dent in arguments to the effect that if there is an action which leads to positive utility sustained over a very long period of time for a very large number of people, it will result in very high utility relative to actions which don’t have that impact: I’m not sure that argument is even falsifiable within a total utilitarian framework.[5] But I don’t think his intention is to argue with [near] tautologies, so much as to insist that the set of decisions which credibly result in robustly positive long-term impact is small enough to usually be irrelevant.
- ^
all of which can be reframed in terms of “making money available to spend on priorities” in classic “hardcore EA” style anyway...
- ^
Some of the implicit assumptions behind the salience of asteroid x-risk aren’t robust: if AI doomers are right, then that massive positive future we’re trying to protect looks a lot smaller. On the other hand, compared with almost any other x-risk scenario, asteroids are straightforward: we don’t have to factor in the possibility of asteroids becoming sneaky in response to us monitoring them, or attach much weight to the idea that informing people about asteroids will motivate them to try harder to make one hit the earth.
- ^
you correctly point out that his choice of asteroid monitoring service is different from Greaves and MacAskill’s. I assume he does so partly to steelman the original, as the counterfactual impact of a government agency incubating the first large-scale asteroid monitoring programme is more robust than that of the marginal donation to NGOs providing additional analysis. And he doesn’t make this point, but I doubt the arguments that decided its funding actually depended on the very long term anyway...
- ^
this is possibly another reason for his choice of asteroid monitoring service...
- ^
Likewise, pretty much anyone familiar with total utilitarianism can conceive of a credible scenario in which the highest total utility outcome would be to murder a particular individual (baby Hitler etc.), and I don’t think it would be credible to insist such a situation could never occur or never be known. This would not, however, fatally weaken arguments against the principle of “murderism” that focused on doubting there were many decision situations where murder should be considered as a priority.