Research Associate at the Global Priorities Institute.
Slightly less ignorant about economic theory than about everything else
Sorry, no, that’s clear! I should have noted that you say that too.
The point I wanted to make is that your reason for saving as an urgent longtermist isn’t necessarily something like “we’re already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later”. You could just think that now isn’t a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.
That is, an urgent longtermist could have stereotypically “patient longtermist” beliefs about the quality of direct-impact spending opportunities available in December 2020.
Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over “patient vs urgent longtermism” and the debate over giving now vs later, and I agree that it’s not as straightforward as people sometimes think.
On the one hand, as you point out, one could be a “patient longtermist” but still think that there are capacity-building sorts of spending opportunities worth funding now.
But I’d also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it’s worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with “urgent longtermism” / “hinge of history”-type views as they’re usually defined.
Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.
That’s not a very firm belief on my part—I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I’d be surprised if the latter were approximately none of the problem.
I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:
long-term (but people just care about the short term, and coordination with future generations is impossible), and
global (but governments just care about their own countries, and we don’t do global coordination well).
So I definitely agree that it’s important that there are many actors in the world who aren’t coordinating well, and that accounting for this would be an important next step.
But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
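As a very rough way of seeing why I'd guess that, here's a back-of-the-envelope sketch. All the numbers below (the pure time preference rate δ, the background hazard rate h, and the US share of the world) are illustrative assumptions, not estimates. Willingness to pay for x-risk mitigation scales (crudely) with the present value of the future welfare the actor cares about:

\[
\mathrm{PV} \;=\; \int_0^\infty w\, e^{-(\delta + h)t}\, dt \;=\; \frac{w}{\delta + h},
\]

where \(w\) is the flow of value the actor cares about. Caring about the whole world rather than just the US scales \(w\) by something like 4 to 25 (the inverse of the US share of world output or population): roughly an order of magnitude. Setting \(\delta = 0\) scales \(1/(\delta + h)\) from about 50 (with, say, \(\delta = 2\%\)/yr and \(h = 0.1\%\)/yr) to 1,000, and by far more than that if the hazard rate is expected to fall over time, which is where the "many orders of magnitude" comes from.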
Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.
The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that's quite different from the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we're still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.
In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )
Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don't think this is what either of the two new economists will be engaging in anytime soon, unfortunately. But I don't think it would be outside a GPI economist's remit, especially once we've grown.
Sorry—maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?
Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there’s surprisingly little really being done.
One point I’d like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.
In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.
I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent the levers you're considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a "naïve" government's approach to prioritizing between, say, increasing this year's GDP and decreasing this year's carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the emissions-lowering side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean
less consumption this year,
less climate damage next year,
less accumulated capital next year with which to mitigate climate damage,
more of an incentive for people next year to allow more emissions,
more predictable weather and therefore easier production next year,
…but this might mean more (or less) emissions next year,
…and so on.
It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I’d say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
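To make the contrast concrete, here's a deliberately cartoonish sketch of the kind of exercise I have in mind: a toy "model of the world" with a single tradeoff (consume now vs. mitigate emissions), solved numerically for the welfare-maximizing split of a budget. Every functional form and number is an assumption invented for the illustration; it isn't any real IAM, just the shape of the procedure.

```python
# A toy "integrated assessment"-style exercise: write down a (cartoonishly) small
# model of the world, then solve for the allocation that maximizes the objective.
# All functional forms and parameters below are made-up assumptions for illustration.
import numpy as np

def welfare(share_to_mitigation, budget=1.0, beta=0.95):
    """Two-period toy world: split a budget between consumption now and emissions
    mitigation; mitigation reduces climate damage to next period's consumption."""
    m = share_to_mitigation * budget            # mitigation spending now
    c1 = (1.0 - share_to_mitigation) * budget   # consumption now
    damage = 0.3 / (1.0 + 5.0 * m)              # next-period damage, falling in m
    c2 = 1.0 - damage                           # next-period consumption
    return np.log(c1 + 1e-9) + beta * np.log(c2 + 1e-9)

shares = np.linspace(0.0, 1.0, 1001)
best = shares[np.argmax([welfare(s) for s in shares])]
print(f"Welfare-maximizing share of the budget spent on mitigation: {best:.2f}")
```

The point isn't the number that comes out; it's that the web of interactions on the list above falls out of the model's structure, instead of having to be enumerated and estimated item by item.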
If the project of "writing down stylized models of the world and solving for the optimal thing for EAs to do in them" counts as cause prioritization, I'd say two projects I've had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner's paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular; I just want to make the case that they're examples of a class of work that is being done to some extent and that should count as prioritization research.
…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.
Hanson has advocated for investing for future giving, and I don’t doubt he had this intuition in mind. But I’m actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries’ pure time preference. I only know that he’s said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?
Also, who made the “pure time preference in the interest rate means patient philanthropists should invest” point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)
That post just makes the claim that “all we really need are positive interest rates”. My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
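To spell out the "neither necessary nor sufficient" claim, suppose, purely for illustration, that the textbook Ramsey relationship holds, with δ the beneficiaries' rate of pure time preference, η the elasticity of marginal utility of consumption, and g the consumption growth rate:

\[
r \;=\; \delta + \eta g .
\]

Not necessary: \(\delta = 0\) with \(\eta g > 0\) still gives \(r > 0\), but then the interest rate only compensates for beneficiaries being richer later, so a patient philanthropist gains nothing in welfare terms by waiting. Not sufficient: \(\delta > 0\) with \(\eta g < -\delta\) (shrinking consumption) gives \(r < 0\), yet the philanthropist's growth-corrected return, \(r - \eta g = \delta\), is still positive, which is the margin that matters for the give-now-vs-later comparison.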
Hanson’s post then says something which sounds kind of like my point, namely that we can infer that it’s better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.
Could you elaborate?
The GWWC Further Pledge
One Richard Chappell has a response here: https://www.philosophyetc.net/2020/03/no-utility-cascades.html
In case the notation out of context isn’t clear to some forum readers: Sensitivity S is the extent to which the earth will warm given a doubling of CO2 in the atmosphere. K denotes kelvins; a change of one kelvin is the same size as a change of one degree Celsius.
I don’t know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.
Whatever you should in fact do here, you probably wouldn’t find a public recommendation to be dishonest. On purely consequentialist grounds, after accounting for the value of the reputation of the EA community and so on, what community guidelines (and what EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty or those that sound more consequentialist?
I’m just putting numbers to the previous sentence: “Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing.”
If “most” means “80%” there, then halting growth would lower the hazard rate from 1% to 0.8%.
Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on “experiments that might go wrong” at t. The model is indeed a simplification in this way.
Just to make sure something’s clear, though (and sorry if this was already clear): Toby’s 20% hazard rate isn’t the current hazard rate; it’s the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.
This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf
Glad you liked it, and thanks for the good questions!
#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that an extra 1% chance of existential catastrophe per year should mean an interest rate roughly 1 percentage point higher. So if you’re going to be
spending on something other than x-risk reduction; or
spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will still be similar risk to mitigate in the next period conditional on survival,
then you should be roughly compensated for the risk. That is, under those circumstances, if investing seemed preferable to spending in the absence of the heightened risk, it should still seem that way given the heightened risk. This does all hold despite the fact that the heightened risk would give humanity such a short life expectancy.
But I totally grant that these assumptions may not hold, and that if they don’t, the heightened risk can be a reason to spend more! I just wanted to point out that there is this force pushing the other way that turns out to render the question at least ambiguous.
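To spell out the "roughly compensated" point in the simplest one-year case: let \(h\) be the added annual existential risk and \(r\) the interest rate you'd have faced without it, and suppose (as above) the rate rises by \(h\). Then the survival-weighted return to waiting a year is

\[
(1-h)(1 + r + h) \;=\; 1 + r - hr - h^2 \;\approx\; 1 + r ,
\]

dropping the second-order terms. So under the assumptions in the two bullets above, the heightened risk leaves the invest-vs.-spend comparison roughly where it was.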
#2: No, there’s no reductio here. Once you get big enough, i.e. are no longer a marginal contributor to the public goods you’re looking to fund, the diminishing returns to spending make it less worthwhile to grow even bigger. (E.g., in the human consumption case, you’ll eventually be rich enough that spending the first half of your fund would make people richer to the point that spending the second half would do substantially less for them.) Once the gains from further investing have fallen to the point that they just balance the (extinction/expropriation/etc.) risks, you should start spending, and continue to split between spending and investment so as to stay permanently on the path where you’re indifferent between the two.
If you’re looking to fund some narrow thing only one other person’s interested in funding, and you’re perfectly patient but the other person is about as impatient as people tend to be, and if you start out with funds the same size, I think you’ll be big enough that it’s worth starting to spend after about fifty years. If you’re looking to spend on increasing human consumption in general, you’ll have to hold out till you’re a big fraction of global wealth—maybe on the order of a thousand years. (Note that this means that you’d probably never make it, even though this is still the expected-welfare-maximizing policy.)
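Here's a toy version of the mechanism in #2, to show where a threshold like that comes from. It is not the model from the paper: it just compares the crude choice "spend the whole fund now" vs. "invest one more year, then spend the whole fund", with made-up numbers, log benefits, and a single other funder spending a fixed amount B each period.

```python
# Toy version of "stop investing once you're no longer a marginal contributor."
# Others spend a fixed amount B on the public good each period; period benefits are
# log(total spending); your fund earns r per year but faces expropriation risk p;
# you have no pure time preference. All numbers are illustrative assumptions.
import numpy as np

r, p = 0.05, 0.01                      # assumed annual return and expropriation risk
x = np.linspace(0.01, 200.0, 20000)    # your fund as a multiple of others' spending B

spend_now = np.log(1.0 + x)                            # marginal impact of spending it all now
wait_a_year = (1.0 - p) * np.log(1.0 + (1.0 + r) * x)  # expected impact of investing one more year

# Small fund: waiting wins (roughly a (1 - p)(1 + r) > 1 comparison).
# Large fund: diminishing returns flip the comparison.
crossover = x[np.argmax(spend_now >= wait_a_year)]
print(f"Waiting stops beating spending once the fund is ~{crossover:.0f}x others' spending.")
```

In the real problem you'd split gradually rather than spend all at once, which is what keeps you on the indifference path described above; but the toy shows why the threshold scales with how big you are relative to everyone else funding the same thing.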
#3: Yes. If ethics turns out to contain pure time preference after all, or we have sufficiently weak duties to future generations for some other reason, then patient philanthropy is a bad idea. :(
I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.
In my own work on the timing of giving, I’ve been thinking about value drift as a simple increase to the discount rate: each year philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If the learning that will take place over the next year (and other reasons to wait, e.g. a positive interest rate) outweighs this (d+x)% (plus the other reasons why resources will be less valuable next year), it’s better to wait. But here we see that, if values definitely change a little each year, it might be best to spend much more quickly than if (as I’ve been assuming) they probably don’t change at all but might change a lot. In the former case, holding onto resources allows for a kind of slippery slope in which each year you change your judgment about whether or not to defer to the next year. So I’m really glad this was written and I look forward to thinking about it more.
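In symbols, the simple framing I’ve been using is just: with r the interest rate, ℓ the annual value of learning (and other benefits of waiting), d the discount rate absent value drift, and x the annual probability of the funds drifting to worthless uses,

\[
\text{wait another year} \;\iff\; r + \ell \;\gtrsim\; d + x .
\]

(The notation here is mine, not the thesis’s.) The thesis’s point, as I read it, is that modeling drift as a small, certain change every year rather than an all-or-nothing event breaks this simple additive treatment.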
One comment on the thesis itself: I think it’s a bit confusing at the beginning, where it says that decision-makers face a tradeoff between “what is objectively known about the world and what they personally believe is true.” The tradeoff they face is between acquiring information and maintaining fidelity to their current preferences, not to their current beliefs. The rest of the thesis is consistent with framing the problem as an information-vs.-preference-fidelity tradeoff, so I think this wording is just a holdover from a previous version of the thesis which framed things differently. But (Max) let me know if I’m missing something.