Postdoc at the Digital Economy Lab, Stanford. I’m slightly less ignorant about economic theory than about everything else.
trammell
I think it depends on the time horizon. If catch-up growth is not near-guaranteed in 100 years, I think waiting 100 years is probably better than spending now. If it is near-guaranteed, I think the case for waiting 100 years is ambiguous, but that there is some longer period of waiting which would be better.
I don’t think Option A is available in practice: I think the recipients will tend to save too little of the money. That’s the primary argument I’ve made for Option B over giving now (see e.g. here).
But with all respect, it seems to me that you got a bit confused a few comments back about how to frame the question of when it’s best to spend on an effort to spur catch-up growth, and when that was made clear, instead of acknowledging it, you’ve kept trying to turn the subject to the question of when to give more generally. Maybe that’s not how you see it, but given that that’s how it seems to me, I hope it’s understandable if I say I find it frustrating and would rather not continue to engage.
No: I think that people should delay spending on global poverty/health on the current margin, not that optimal total global poverty/health spending today would be 0.
But that’s a big question, and I thought we were just trying to make progress on it by focusing on one narrow angle here: namely whether or not it is in some sense “at least 1,000x better to stimulate faster economic growth in the poorest countries today than it is to do it 100 years from now”. I think that, conditional on a country not having caught up in 100 years, there’s a decent chance it will still not have caught up in 200 years; and that in this case, when one thinks it through, initiating catch-up in 100 years is at least half as good as doing so today, more or less.
The returns certainly aren’t all that matter.
I don’t follow your questions. We’re comparing spending now to induce some chance of growth starting now with spending later to induce some chance of growth starting later, right? To make the scenario precise, say:
The country is currently stagnant, and its people collectively enjoy “1 util per year”. Absent your intervention, it will stay stagnant for 200y.
Spending $1m now has a 1% chance of kicking off catch-up growth.
Investing it for 100y before spending has a 4% chance of kicking off catch-up growth then (because $868m>>$1m). The money won’t be lost in the meantime (or, we can say that the chance it gets lost is incorporated into the 4%).
In either case, the catch-up will be immediate and bring them to a state where they permanently collectively enjoy “2 utils per year”.
In this case, the expected utility produced by spending now is 1% × (2 − 1) × 200 = 2 utils.
The expected utility produced by spending in 100y is 4% × (2 − 1) × 100 = 4 utils. The gap can be arbitrarily large if we imagine that the default is stagnation for a longer period of time than 200y (or negative if we imagine that it was close to 100y), and this is true regardless of how much money the beneficiaries wind up with due to the growth, since it’s the growth itself that produces the gap between the 2 utils and the 1 util.
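If it helps, here’s a minimal sketch of the arithmetic in code, so the numbers can be varied. The only assumption I’m adding is that the ~$868m figure reflects roughly a 7% annual return compounded over 100 years (1.07^100 ≈ 868); everything else is just the stylized scenario above.

```python
# Toy comparison of "spend $1m now" vs. "invest for 100y, then spend ~$868m",
# using the stylized numbers from the scenario above.

def expected_utils(p_success, start_year, default_stagnation_years, gain_per_year=1.0):
    """Expected utils from a p_success chance of starting permanent catch-up growth
    at start_year, when stagnation would otherwise last default_stagnation_years."""
    years_of_benefit = max(default_stagnation_years - start_year, 0)
    return p_success * gain_per_year * years_of_benefit

# Assumed ~7% annual return: $1m grows to roughly $868m over 100 years.
print(round(1.07 ** 100))  # -> 868

for T in (200, 150, 1000):  # default years of stagnation absent any intervention
    now = expected_utils(0.01, 0, T)      # 1% chance of kicking off growth today
    later = expected_utils(0.04, 100, T)  # 4% chance of kicking off growth in 100y
    print(f"T={T}: spend now = {now} utils, spend in 100y = {later} utils")
# T=200: 2.0 vs 4.0; T=150: 1.5 vs 2.0; T=1000: 10.0 vs 36.0
```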
That depends on how long it would have stayed poor without the intervention!
I think the case for waiting is stronger, not weaker, if you think the chance that poor countries won’t have exhibited catch-up growth by 2126 is non-negligible. If they haven’t exhibited catch-up growth by 2126, I expect $868 million then is much more likely to trigger it than $1 million today.
I don’t think trying to invest for a long time is obviously a silly strategy. But I agree that people or groups of people should decide for themselves whether they want to try to do that with their money, and a charity fundraising this year would be betraying their donors’ trust if their plan was actually to invest it for a long time.
Thanks for sharing! I wasn’t aware of the case for thinking that the right hemisphere has so much less welfare capacity than the left. If this is true, it leaves me thinking that the sum of the welfare capacities of the two parts of the split brain patient is significantly less than 1, rather than the 0.99 I went with in the example.
It’s interesting that this EA Forum post forms so much of the basis for its answer to the second question. I wonder if it’s because so little has been written on this, or just because the way you asked the question used language especially similar to this post.
Even though the Gemini report seems to represent my view (what it calls the “divisive model”) and Fischer’s view (the “additive model”) well at first, it gets pretty confused in a few places:
In Section 3.1, it defines hedonistic welfare per unit time in precisely the way I’m arguing against, where there’s no “size” dimension and all it means to be “a being with a higher [welfare] capacity” is that you “can experience ‘deeper’ pain or ‘higher’ pleasure”.
It says that the divisive model is captured by the quote
“The view that expanding a mind from one hemisphere to two… would increase its welfare capacity by much less than 100%—indeed, only by something like 1%.”
It then says it
reject[s] the “Strict Divisive” model because pain does not dilute with volume,
(and also rejects the “Strict Additive” model), and goes with what it frames as something in the middle, but which is fully the additive model after adjusting for the fact that, in its view, the right hemisphere lacks various capacities. But despite the counterintuitive terminology it’s chosen, it’s the Additive view, not the Divisive view, on which “pain dilutes with volume”. The Additive view says that total pain falls if you reconnect the two hemispheres of a split-brain patient in the ice bath, because a welfare subject’s pain is something like an average of pain across the phenomenal field rather than a sum.
I agree that the question of when to give is very important, and that it’s often underappreciated how strong of a reason compound interest is for giving later. This seems like a subject people in EA rediscover every few years and then largely forget about—it’s a shame that the intricacies of the arguments back and forth get lost in the process, but good to see that people stay interested in thinking this through.
If it’s helpful at all, here’s a relatively comprehensive writeup of my own thoughts on the subject from three years ago (and an even older talk and podcast, for the more audio-visually inclined; and a fund some people at Founders Pledge set up for those interested in committing to long-term saving). You can also find various objections to (and elaborations on) this material from others on the EA Forum at that time, many of them excellent. I do agree with the common response that the prospect of near-term transformative AI strengthens the case for giving sooner, though not quite as straightforwardly or extremely as it might seem at first, and in fact I’m currently in the middle of writing up some thoughts on this front. But in the meantime, let me just pitch checking out the old commentary. : )
A new way to estimate welfare growth
Thanks, that’s a great blog post and very relevant!
I don’t think I agree with Robin’s proposal that the main reason for all this product proliferation is that we want to have unique items—it seems to me that we have a lot of proliferation in domains where we aren’t particularly keen on expressing ourselves, like brands of pencils, consistent with the standard explanation that each entrepreneur needs a tiny bit of market power to profit from his/her innovation. But whatever the reason, I agree that Robin could well be right that in some sense we get way too much (trivially distinct) product variety by default.
It does :) This may also be relevant.
I don’t see the justification for donating the interest. If we think the marginal utility of the poor will fall more slowly than the interest rate, as it typically will if the poor (and others spending on them) discount the future and are aware of how much consumption they’ll have in the future, then it’s optimal to save everything, including the interest, until marginal utility is falling as quickly as the invested money is growing.
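To spell out the condition I have in mind (a minimal sketch, assuming a constant rate of return r, no pure time preference on our side, and a donation too small to change the recipients’ consumption path c_t): a marginal dollar given at time t produces u′(c_t), while the same dollar invested a bit longer produces e^{rε} u′(c_{t+ε}), so for small ε

$$u'(c_t) < e^{r\varepsilon}\, u'(c_{t+\varepsilon}) \;\Longleftrightarrow\; r > -\frac{d}{dt}\ln u'(c_t),$$

i.e. it’s better to keep everything invested, interest included, as long as the interest rate exceeds the rate at which the recipients’ marginal utility is falling, and to start giving only once the two rates are equal.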
Fair enough, I think the lack of a direct response has been due to an interaction between the two things. At first, people familiar with the existing arguments didn’t see much to respond to in David’s arguments, and figured most people would see through them. Later, when David’s arguments had gotten around more and it became clear that a response would be worthwhile (and for that matter when new arguments had been made which were genuinely novel), the small handful of people who had been exploring the case for longtermism had mostly moved on to other projects.
I would disagree a bit about why they moved on, though: my impression is that the bad association the word “longtermism” picked up from FTX was only slightly responsible for their shift in focus, and that the main driver was just that faster-than-expected AI progress mostly convinced them that the most valuable philosophy work to be done was more directly AI-related.
Thanks for saying a bit more about how you’re interpreting “scope of longtermism”. To be as concrete as possible, what I’m assuming is that we both read Thorstad as saying “a philanthropist giving money away so as to maximize the good from a classical utilitarian perspective” is typically outside the scope of decision-situations that are longtermist, but let me know if you read him differently on that. (I think it’s helpful to focus on this case because it’s simple, and because it’s the one G&M most clearly argue is longtermist on the basis of those two premises.)
It’s a tautology that the G&M conclusion that the above decision-situation is longtermist follows from the premises, and no, I wouldn’t expect a paper disputing the conclusion to argue against this tautology. I would expect it to argue, directly or indirectly, against the premises. And you’ve done just that: you’ve offered two perfectly reasonable arguments for why the G&M premise (ii) might be false, i.e. giving to PS/B612F might not actually do 2x as much good in the long term as the GiveWell charity in the short term. (1) In footnote 2, you point out that the chance of near-term x-risk from AI may be very high. (2) You say that the funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met. You also suggest in footnote 3 that maybe NGOs will do a worse job of it than the government.
I won’t argue against any of these possibilities, since the topic of this particular comment thread is not how strong the case for longtermism is all things considered, but whether Thorstad’s “Scope of LTism” successfully responds to G&M’s argument. I really don’t think there’s much more to say. If there’s a place in “Scope of LTism” where Thorstad offers an argument against (i) or (ii), as you’ve done, I’m still not seeing it.
First, to clarify, Greaves and MacAskill don’t use the Spaceguard Survey as their example. They use giving to the Planetary Society or B612 Foundation as their example, which do similar work.
Could you spell out what you mean by “the actual scope of longtermism”? In everyday language this might sound like it means “the range of things it’s justifiable to work on for the sake of improving the long term”, or something like that, but that’s not what either Thorstad or Greaves and MacAskill mean by it. They mean [roughly; see G&M for the exact definition] the set of decision situations in which the overall best act does most of its good in the long term.
Long before either of these papers, people in EA (and of course elsewhere) had been making fuzzy arguments for and against propositions like “the best thing to do is to lower x-risk from AI because this will realize a vast and flourishing future”. The project G&M, DT, and other philosophers in this space were engaged in at the time was to go back and carefully, baby step by baby step, formalize the arguments that go into the various building blocks of these “the best thing to do is...” conclusions, so that it’s easier to identify which elements of the overall conclusion follow from which assumptions, how someone might agree with some elements but disagree with others, and so on. The “[scope of] longtermism” framing was deliberately defined broadly enough that it doesn’t make claims about what the best actions are: it includes the possibility that giving to the top GiveWell charity is the best act because of its long-term benefits (e.g. saving the life of a future AI safety researcher).
The Case offers a proof that if you accept the premises (i) giving to the top GiveWell charity is the way to do the most good in the short term and (ii) giving to PS/B612F does >2x more good [~all in the long term] than the GiveWell charity does in the short term, then you accept (iii) that the scope of longtermism includes every decision situation in which you’re giving money away. It also argues for premise (ii), semi-formally but not with anything like a proof.
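To make the tautology explicit (this is my own compressed sketch of the G&M reasoning, not their formal statement): let S be the short-term good done by giving the dollar to the top GiveWell charity. By (i), no way of giving the dollar does more short-term good than S; by (ii), giving it to PS/B612F does total good greater than 2S. So in any decision situation about where to give the dollar, the overall best act B does total good greater than 2S while doing short-term good of at most S, which means its long-term good satisfies

$$\text{LT}(B) = \text{Total}(B) - \text{ST}(B) > 2S - S = S \ge \text{ST}(B),$$

i.e. the best act does most of its good in the long term, which is just what it takes, on the G&M definition, for the decision situation to count as longtermist.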
Again, the whole point of doing these sorts of formalizations is that it helps to sharpen the debate: it shows that a response claiming that the scope of longtermism is actually narrow has to challenge one of those premises. All I’m pointing out is that Thorstad’s “Scope of Longtermism” doesn’t do that. You’ve done that here, which is great: maybe (ii) is false because giving to PS/B612F doesn’t actually do much good at all.
I’m not defending AI risk reduction, nor even longtermism. I’m arguing only that David Thorstad’s claim in “The Scope of Longtermism” was rebutted before it was written.
Almost all longtermists think that some interventions are better than asteroid monitoring. To be conservative and argue that longtermism is true even if one disagrees with the harder-to-quantify interventions most longtermists happen to favor, the Case uses an intervention with low but relatively precise impact, namely asteroid monitoring, and argues that it does more than 2x as much good in the long term as the top GiveWell charity does in the short term.
This is a non-sequitur. The “scope” which he claims is narrow is a scope of decision situations, and in every decision situation involving where to give a dollar, we can give it to asteroid monitoring efforts.
I know David well, and David, if you’re reading this, apologies if it comes across as a bit uncharitable. But as far as I’ve ever been able to see, every important argument he makes in any of his papers against longtermism or the astronomical value of x-risk reduction was refuted pretty unambiguously before it was written. An unfortunate feature of an objection that comes after its own rebuttal is that sometimes people familiar with the arguments will skim it and say “weird, nothing new here” and move on, and people encountering it for the first time will think no response has been made.
For example,[1] I think the standard response to his arguments in “The Scope of Longtermism” would just be the Greaves and MacAskill “Case for Strong Longtermism”.[2] The Case, in a nutshell, is that by giving to the Planetary Society or B612 Foundation to improve our asteroid/comet monitoring, we do more than 2x as much good in the long term, even on relatively low estimates of the value of the future, than giving to the top GiveWell charity does in the short term. So if you think GiveWell tells us the most cost-effective way to improve the short term, you have to think that, whenever your decision problem is “where to give a dollar”, the overall best action does more good in the long term than in the short term.
You can certainly disagree with this argument on various grounds—e.g. you can think that non-GiveWell charities do much more good in the short term, or that the value of preventing extinction by asteroid is negative, or for that matter that the Planetary Society or B612 Foundation will just steal the money—but not with the arguments David offers in “The Scope of Longtermism”.
His argument [again, in a nutshell] is that there are three common “scope-limiting phenomena”, i.e. phenomena that make it the case that the overall best action does more good in the long term than in the short term in relatively few decision situations. These are
(1) rapid diminution (the positive impact of the action per unit time quickly falls to 0),
(2) washing out (the long-term impact of the action has positive and negative features which are hard to predict and cancel out in expectation), and
(3) option unawareness (there’s an action that would empirically have large long-term impact but we don’t know what it is).
He grants that when Congress was deciding what to do with the money that originally went into an asteroid monitoring program called the Spaceguard Survey, longtermism seems to have held. So he’s explicitly not relying on an argument that there isn’t much value to trying to prevent x-risk from asteroids. Nevertheless, he never addresses the natural follow-up regarding contributing to improved asteroid monitoring today.
Re (1), he cites Kelly (2019) and Sevilla (2021) as reasons to be skeptical of claims from the “persistence” literature about various distant cultural, technological, or military developments having had long-term effects on the arc of history. Granting this doesn’t affect the Case that whenever your decision problem is “where to give a dollar”, the overall best action does more good in the long term than in the short term.[3]
Re (2), he says that we often have only weak evidence about a given action’s impact on the long-term future. He defends this by pointing out (a) that attempts to forecast actions’ impacts on a >20 year timescale have a mixed track record, (b) that professional forecasters are often skeptical of the ability to make such forecasts, and (c) that the overall impact of an action on the value of the world is typically composed of its impacts on various other variables (e.g. the number of people and how well-off they are), and since it’s hard to forecast any of these components, it’s typically even harder to forecast the action’s impact on value itself. None of this applies to the Case. We can grant that most actions have hard-to-predict long-term consequences, and that forecasters would recognize this, without denying that in most decision-situations (including all those where the question is where to give a dollar), there is one action that has long-term benefits more than 2x as great as the short-term benefits of giving to the top GiveWell charity: namely giving to the Planetary Society or B612 Foundation. There is no mixed track record of forecasting the >20 year impact of asteroid/comet monitoring, no evidence that professional forecasters are skeptical of making such forecasts, and, in the case of the Spaceguard Survey, he implicitly grants that the complexity of forecasting its long-term impact on value isn’t an issue.
Re (3), again, the claim the Case makes is that we have identified one such action.
[1] I also emailed him about an objection to his “Existential Risk Pessimism and the Time of Perils” in November and followed up in February, but he’s responded only to say that he’s been too busy to consider it.
[2] Which he cites! Note that Greaves and MacAskill defend a stronger view than the one I’m presenting here, in particular that all near-best actions do much more good in the long term than in the short term. But what David argues against is the weaker view I lay out here.
[3] Incidentally, he cites the fact that “Hiroshima and Nagasaki returned to their pre-war population levels by the mid-1950s” as an especially striking illustration of lack of persistence. But as I mentioned to him at the time, it’s compatible with the possibility that those regions have some population path, and we “jumped back in time” on it, such that from now on the cities have, at any time t+10, about as many people as they would otherwise have had at t. If so, bombing them could still have most of its effects in the future.
Thanks, I agree that when to spend remains an important and non-obvious question! I’m glad to see people engaging with it again, and I think a separate post is the place for that. I’ll check it out in the next few days.