Econ PhD student at Oxford and research associate at the Global Priorities Institute. I’m slightly less ignorant about economic theory than about everything else.
I disagree with the common framing that saving lives and so on constitute one straightforward, unambiguous way to do good, and that longtermism just constitutes or motivates some interventions with the potential to do even more good.
It seems to me (and I’m not alone, of course) that concern for the long term renders the sign of the value of most of the classic EA interventions ambiguous. In any event, it renders the magnitude of their value more ambiguous than it is if one disregards flow-through effects of all kinds. If
1) accounting for long-term consequences lowers the expected value (or whatever analog of expected value we use in the absence of precise expectations) of classic EA interventions, in someone’s mind, and
2) she’s not persuaded that any other interventions—or any she can perform—offer as high a (quasi-)expected value, all things considered, as the classic EA interventions offer after disregarding flow-through effects,
then I think it’s reasonable for her to feel less happy about how much good she can do as she becomes more concerned about the long term.
For the record, I don’t know how common this feeling is, or how often people feel more excited about their ability to save lives and so on than they did a few years ago. One could certainly think that saving lives, say, has even more long-term net positive effects than short-term positive effects. I just want to say that when someone says that they feel less excited about how much good they can do, and that longtermism has something to do with that, that could be justified. They might just be realizing that doing good isn’t and never was as good as they thought it was.
Thanks for pointing that out!
For those who might worry that you’re being hyperbolic, I’d say that the linked paper doesn’t say that they are white supremacists. But it does claim that a major claim from Nick Beckstead’s thesis is white supremacist. Here is the relevant quote, from pages 27-28:
“As he [Beckstead] makes the point,
>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
This is overtly white-supremacist.”
The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn’t seem like a very helpful contribution to the discussion.
Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios—along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.
That said, there is one broad limitation to this analysis that hasn’t gotten quite as much attention so far as I think it deserves. (Basil: yes, this is the thing we discussed last summer….) This is that low real, risk-free interest rates are compatible with the belief
1) that there will be no AI-driven growth explosion,
as you discuss—but also with some AI-growth-explosion-compatible beliefs investors might have, including
2) that future growth could well be very fast or very slow, and
3) that growth will be fast but marginal utility in consumption will nevertheless stay high, because AI will give us such mindblowing new things to spend on (my “new products” hobby-horse).
So it seems impossible to put any upper bound (below 100%) on the probability people are assigning to near-term explosive growth purely by looking at real, risk-free interest rates. To infer that investors believe (1), one of course has to think hard about all the alternatives (including but not limited to (2) and (3)) and rule them out. But (if I’m not mistaken) all you do along these lines is to partly rule out (2), by exploring the implications of putting a yearly probability on the economy permanently stagnating. I found that helpful. As you observe, merely (though I understand that you don’t see it as “merely”!) introducing a 20% chance of stagnation by 2053 is enough to mostly offset the interest rate increases produced by an 80% chance of Cotra AI timelines.

You don’t currently incorporate any negative-growth scenarios, but even a small chance of negative growth seems like it should be enough to fully offset said interest rate increase. This is because of the asymmetry produced by diminishing marginal utility: the marginal utility of an extra dollar saved can only fall to zero, if you turn out to be very rich in the future, whereas it can rise arbitrarily high if you turn out to be very poor. (You note this when you say “the real interest rate reflects the expected future economic growth rate, where importantly the expectation is taken over the risk-neutral measure”, but I think the departure from caring about what we would normally call the expected growth rate is important and kind of obscured here.)
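To illustrate the asymmetry numerically, here is a minimal sketch. It assumes a standard one-period consumption Euler equation under CRRA utility, 1 + r = e^ρ / E[G^(−γ)] where G is gross consumption growth, with purely illustrative parameters (γ = 2, ρ = 1%) and made-up scenario probabilities; none of these numbers come from the post.

```python
# Minimal sketch: one-period risk-free rate under CRRA utility,
# 1 + r = e^rho / E[G^(-gamma)], where G is gross consumption growth.
# gamma = 2 and rho = 0.01 are illustrative assumptions, not calibrated values.
import math

def risk_free_rate(scenarios, gamma=2.0, rho=0.01):
    """scenarios: list of (probability, gross_growth) pairs."""
    expected_mu = sum(p * g ** (-gamma) for p, g in scenarios)  # E[G^(-gamma)]
    return math.exp(rho) / expected_mu - 1.0

baseline  = [(1.0, 1.02)]                                # 2% growth for sure
ai_boom   = [(0.8, 1.30), (0.2, 1.02)]                   # 80% chance of 30% growth
with_bust = [(0.75, 1.30), (0.20, 1.02), (0.05, 0.30)]   # + 5% chance consumption falls 70%

for name, s in [("baseline", baseline), ("AI boom", ai_boom), ("boom + bust risk", with_bust)]:
    print(f"{name}: r = {risk_free_rate(s):.1%}")
# Output: roughly 5.1%, 51.7%, and -15.2% respectively. G**(-gamma) explodes
# as G falls, so a small downside probability dominates: diminishing marginal
# utility makes the effect of growth uncertainty on r asymmetric.
```

With these toy numbers, the small chance of a large consumption decline doesn’t just dampen the boom scenario’s upward pressure on r; it pushes r below the baseline entirely.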
This seems especially relevant given that what investors should be expected to care about is the expected growth rate of their own future consumption, rather than of GDP. Even if they’re certain that AI is coming and bound to accelerate GDP growth, they could worry that it stands some chance of making a small handful of people rich and themselves poor. You write that “truly transformative AI leading to 30%+ economy-wide growth… would not be possible without having economy-wide benefits”, but this is not so clear to me. You might think that’s crazy, but given that I don’t, presumably some other investors don’t.
Anyway: this is all to say that I’m skeptical of inferring much from risk-free interest rates alone. This doesn’t mean we can’t draw inferences from market data, though! For one thing, on the hypothesis that investors believe “(2)”, we would probably expect to see the “insurance value” of bonds, and thus the equity premium, rising over time (as we do, albeit weakly). For another thing, one can presumably test how the market reacts to AI news. I’m certainly interested to see any further work people do in this direction.
In case the notation out of context isn’t clear to some forum readers: sensitivity S is the extent to which the earth will warm given a doubling of CO2 in the atmosphere. K denotes kelvins; a temperature change of 1 K is the same size as a change of 1 °C.
Thank you, I’m flattered! But remember, all: Will MacAskill saying we have good arguments doesn’t necessarily mean we have good arguments :)
Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.
For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).
Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.
I agree that it’s totally plausible that, once all the considerations are properly analyzed, we’ll wind up vindicating the existential risk view as a simplification of “maximize utility”. But in the meantime, unless one is very confident or thinks doom is very near, “properly analyze the considerations” strikes me as a better simplification of “maximize utility”.
As the one who supervised him, I too think it’s a super exciting and useful piece of research! :)
I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:
Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the “global impatient optimum” from a “decentralized allocation” of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the “rate of pure time preference” on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
Seeing what happens when you replace the N^(ε−β) term in the hazard function with population raised to a new exponent, say N^μ, to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between. (There’s a sketch of this after the list.)
Seeing what happens when you use a different growth model—in particular, one that doesn’t depend on population growth.
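On the second of these, a little more explicitly: here is a sketch in my own notation, suppressing the technology terms and assuming the hazard rate depends on aggregate consumption C and aggregate safety spending B roughly as below (the exact functional form in the paper may differ).

```latex
% C = aggregate consumption, B = aggregate safety spending, N = population,
% c = C/N and b = B/N their per-capita levels.
\[
  \delta \;\propto\; C^{\varepsilon} B^{-\beta}
         \;=\; c^{\varepsilon}\, b^{-\beta}\, N^{\varepsilon-\beta}
  \qquad\longrightarrow\qquad
  \delta \;\propto\; c^{\varepsilon}\, b^{-\beta}\, N^{\mu}
\]
% mu = epsilon - beta recovers the aggregate-spending case; mu = 0 makes risk
% depend on per-capita spending alone; values in between interpolate.
```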
I expect that different people at GPI have somewhat different goals for their own research, and that this varies a fair bit between philosophy and economics. But for my part,
my primary goal is to do research that philanthropists find useful, and
my secondary goal is to do research that persuades other academics to see certain important questions in a more “EA” way, and to adjust their own curricula and research accordingly.
On the first point—and apologies if this sounds self-congratulatory or something, but I’m just providing the examples of GPI’s impact that I happen to have had a hand in, in case they’re helpful!—I’m (naturally) excited that my work on the allocation of philanthropic spending over time motivated Founders Pledge to launch the Patient Philanthropy Fund. I’m also glad that a few larger philanthropists have told me that it has had at least some impact on how they think about the question of how they should distribute their giving over time.
On the second point, I don’t really expect to be influencing econ professors much yet since I’m still just a PhD student, but my literature review on economic growth under AI will be used in a Coursera course on the economics of AI. (To illustrate the sort of thing I have in mind, though, the philosophers already seem to have had a fair bit of success influencing curricula: professors at Yale and UMich are now offering whole courses on longtermism, largely drawing on GPI papers.)
I am not focused on attempting to change policy.
Sure. Those particular papers rely on a mathematical trick that only lets you work out how much a society should be willing to pay to avoid proportional losses in consumption. The x-risk case turns out to differ in lots of important ways, and the trick doesn’t generalize to cover them. But because the papers seem so close to being x-risk-relevant, I know of like half a dozen EA econ students (including me) who have tried extending them at some point before giving up…
I’m aware of at least a few other “common EA econ theorist dead ends” of this sort, and I’ll try making a list, along with something written about each of them. When this and the rest of the course material is done, I’ll post it.
Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over “patient vs urgent longtermism” and the debate over giving now vs later, and I agree that it’s not as straightforward as people sometimes think.
On the one hand, as you point out, one could be a “patient longtermist” but still think that there are capacity-building sorts of spending opportunities worth funding now.
But I’d also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it’s worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with “urgent longtermism” / “hinge of history”-type views as they’re usually defined.
Thanks for writing this! For all the discussion that population growth/decline has gotten recently in EA(/-adjacent) circles, as a potential top cause area—to the point of PWI being founded and Elon Musk going on about it—there hasn’t been much in-depth assessment of the case for it, and I think this goes a fair way toward filling that gap.
One comment: you write that “[f]or a rebound [in population growth] to happen, we would only need a single human group satisfying the following two conditions: long-run above-replacement fertility, and a high enough “retention rate”, that is, a large enough fraction of the descendants of this group continues to belong to the group.” I think that’s a good and underappreciated point, but I also think it’s a bit weaker than it sounds at first, since something of a converse also holds. I.e. for permanent population decline to happen, we would only need a single human group satisfying the following two conditions: long-run below-replacement fertility, and a high enough “attraction rate”, that is, a large enough fraction of people born outside the group continues to join the group. “Western civilization” has arguably been such a group for the last few generations, and it’s not obvious to me that it (or its “descendants”) won’t continue to be for a very long time.
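To make the symmetry concrete, here is a toy two-group sketch; every parameter is hypothetical, and per-generation net reproduction rates stand in for a real demographic model. The long-run growth factor of total population is the dominant eigenvalue of the per-generation transition matrix:

```python
# Toy model: H = a group with above-replacement fertility, L = a
# below-replacement group. All numbers are made up for illustration.
import numpy as np

def long_run_growth(f_H, f_L, attraction, defection):
    """f_*: children per member per generation (1.0 = replacement).
    attraction: fraction of H-born individuals who join L each generation.
    defection: fraction of L-born individuals who join H."""
    M = np.array([[f_H * (1 - attraction), f_L * defection],
                  [f_H * attraction,       f_L * (1 - defection)]])
    return max(abs(np.linalg.eigvals(M)))  # dominant eigenvalue

# Low attraction (high retention): the high-fertility group comes to
# dominate, so total population eventually rebounds.
print(long_run_growth(1.5, 0.8, attraction=0.1, defection=0.02))  # ~1.35 > 1
# High attraction: the below-replacement group absorbs enough of each cohort
# that total population declines indefinitely, even though f_H > 1.
print(long_run_growth(1.5, 0.8, attraction=0.5, defection=0.02))  # ~0.88 < 1
```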
There are now questions on Metaculus about whether this will pass:
https://www.metaculus.com/questions/8663/us-to-make-patient-philanthropy-harder-soon/
https://www.metaculus.com/questions/8664/patient-philanthropy-harder-in-the-us-by-30/
I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:
long-term (but people just care about the short term, and coordination with future generations is impossible), and
global (but governments just care about their own countries, and we don’t do global coordination well).
So I definitely agree that it’s important that there are many actors in the world who aren’t coordinating well, and that accounting for this would be an important next step.
But my intuition is that the first point is substantially more important than the second, and so the model assumes away much of the problem, but far from all of it. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are “all-things-considered”.
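(If a concrete version helps: with made-up numbers, the calculation is just ordinary conditioning, nothing exotic.)

```python
# Made-up numbers for the excavation example above.
p_treasure = 0.3                 # all-things-considered probability it's treasure
p_success_given_treasure = 0.8   # success is more likely if it's treasure...
p_success_given_boulder = 0.2    # ...but a boulder is worth nothing anyway
value_of_treasure = 1000.0

# The correlation between success probability and conditional value is
# handled by conditioning; the probabilities stay "all-things-considered".
ev_of_trying = p_treasure * p_success_given_treasure * value_of_treasure
print(ev_of_trying)  # 240.0
```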
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
By the way, someone wrote this Google doc in 2019 on “Stock Market prediction of transformative technology”. I haven’t taken a look at it in years, and neither has the author, so understandably enough, they’re asking to remain nameless to avoid possible embarrassment. But hopefully it’s at least somewhat relevant, in case anyone’s interested.
Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.
That’s not a very firm belief on my part—I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I’d be surprised if the latter were approximately none of the problem.
Still no summary of the paper as a whole, but if you’re interested, I just wrote a really quick blog post which summarizes one takeaway. https://philiptrammell.com/blog/45
Hi, sorry for the late reply—just got back from vacation.
As with most long posts, I expect this post has whatever popularity it has not because many people read it all, but because they skimmed parts, thought they made sense, and felt the overall message resonated with their own intuitions. Likewise, I expect your comment has whatever popularity it has because its upvoters have different intuitions, and because it looks on a skim as though you’ve shown that a careful reading of the post validates those intuitions instead…! But who knows.
Since there are hard-to-quantify considerations both for and against philanthropists being very financially risk tolerant, if your intuitions tend to put more weight on the considerations that point in the pro-risk-tolerance direction, you can certainly read the post and still conclude that a lot of risk tolerance is warranted. E.g. my intuition differs from yours at the top of this comment. As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
Beyond an intuition-based re-weighting of the considerations, though, you raise questions about the qualitative validity of some of the points I raise. And as long as your comment is, I think the post does already address essentially all these questions. (Indeed, addressing them in advance is largely why the post is as long as it is!) For example, regarding “arguments from uncertainty”, you say
I don’t see how this could flatten out the utility function. This should be in “Justifying a more cautious portfolio”.
But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:
“one might argue that philanthropists have a hard time distinguishing between the value of different projects, and that this makes the “ex ante philanthropic utility function”, the function from spending to expected impact, less curved than it would be under more complete information…”
Or, in response to my point that “The philanthropic utility function for any given “cause” could exhibit more or less curvature than a typical individual utility function”, you say
I don’t find any argument convincing that philanthropic utility functions are more curved than typical individuals. (As I’ve noted above where you’ve attempted to argue this. This should be in “Justifying a riskier portfolio”.)
Could you point me to what you’re referring to, when you say you note this above? To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.
More generally, so I can better understand what might be going on with all these evident failures of communication on my end, instead of producing an ever-lengthening series of point-by-point replies: could you say more about why you don’t feel your questions are answered in these cases?
Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there’s surprisingly little really being done.
One point I’d like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.
In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.
I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers you’re considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government’s approach to prioritizing between, say, increasing this year’s GDP and decreasing this year’s carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the lowering-emissions side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean
less consumption this year,
less climate damage next year,
less accumulated capital next year with which to mitigate climate damage,
more of an incentive for people next year to allow more emissions,
more predictable weather and therefore easier production next year,
…but this might mean more (or less) emissions next year,
…and so on.
It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I’d say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
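As a cartoon of what I mean, in code: a deliberately tiny “IAM” with one lever and two periods, where every functional form and number is illustrative (nothing like a calibrated model):

```python
# A toy two-period "integrated assessment model": pick an abatement level
# that trades off consumption this period against climate damage next period.
# All functional forms and parameters below are purely illustrative.
import math
from scipy.optimize import minimize_scalar

Y = 100.0           # output in each period
rho = 0.02          # pure time preference
damage_scale = 0.3  # fraction of period-2 output lost at zero abatement

def welfare(abatement):
    c1 = Y * (1 - 0.2 * abatement**2)              # abatement is costly now...
    c2 = Y * (1 - damage_scale * (1 - abatement))  # ...but reduces damage later
    return math.log(c1) + math.exp(-rho) * math.log(c2)

res = minimize_scalar(lambda a: -welfare(a), bounds=(0.0, 1.0), method="bounded")
print(f"optimal abatement: {res.x:.2f}")  # ~0.72 with these toy numbers
# The optimum internalizes the whole tangle of interactions at once, rather
# than scoring "GDP" and "emissions" as separate items on an infinite list.
```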
If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don’t mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.
…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.