Research manager at ICFG.eu, board member at Langsikt.no, doing policy research to mitigate risks from biotechnology and AI. Ex-SecureBio manager, ex-McKinsey Global Institute fellow and founder of the McKinsey Effective Altruism community. Follow me on Twitter at @jgraabak
Also, @Joel Becker, at this point you have called my thinking “pretty tortured” twice (in comments to the original post) and “4D-chess” here. Especially the first phrase seems—at least to me—more like soldier mindset than scout mindset, in that I don’t see how words like that make a discussion more truth-seeking or enlighten anyone.
I try to ask both “what does Joel know that I don’t?” and “what do I know that Joel doesn’t, and how can I help him understand it?”. This post is my attempt at engaging in that way. In contrast, I don’t see your comments offering much new evidence (e.g., in the comments to the original post you write things like “Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb”—which you should realize I am well aware of; I am making my argument without that assumption, so you are only arguing against a straw man). So I will try to offer my explanation one more time, in the hope that it could lead to a productive debate.

Let’s use a physical analogy for financial markets—say, a horse race track. People take their money there, store it for some time, and take out a different amount of money when they leave, depending on the quality of their bets. If interest rates are ruled by capital supply, then making a bet on interest rates is akin to betting on how large the volumes people bet tomorrow will be. So if you believe the horse race track is going to burn down tomorrow, you can of course go there and place the bet “trading volumes in two days will be really low”—and if you’re right about the fire, you’re likely also right about the trading volumes. But in the meantime, the track has burned down, and no one is left to pay out your winnings.

Now of course, you can find someone willing to buy you out of the bet before things burn down, if you convince them it is a safe way to profit. You can tell everyone about the forest fire you observed nearby, and how within 24 hours it will reach the track and burn it to the ground. People may even believe your evidence. But that still won’t get anyone to buy you out of your bet, since they realize they will be left holding the burned bag—unless they can find an even bigger fool to sell to. So the only way to profit from your knowledge of the impending fire is to pull all of your bets, so you don’t have cash inside the building when it burns down. That will decrease the volumes on the market a little, but only by a tiny fraction of the total, since there are many bettors at the track.

This analogy isn’t perfect, but my point stands: the equilibrium you’re hypothesizing doesn’t exist. If you’re hypothesizing a capital supply-side response to short AI timelines, that can only happen if a large fraction of consumers decide to decrease their savings rates, and that would likely require such overwhelming evidence for near-term AI that it would no longer be a leading indicator. (As stated in the earlier comment, I think the capital demand-side argument has more merit, however.)
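(To put very rough numbers on that supply-side point, here is a toy loanable-funds sketch. Every parameter is made up purely for illustration—it is not calibrated to any real market—but it shows how little rates move when only a small informed group pulls its savings, versus a broad consumer shift.)

```python
# Toy loanable-funds sketch: a linear demand curve for capital meets a
# supply of savings; the interest rate is whatever clears the market.
# Informed traders withdrawing their savings shrinks supply a little;
# a broad consumer shift shrinks it a lot. All numbers are made up.

def clearing_rate(total_savings, demand_intercept=0.13, demand_slope=0.001):
    """Rate at which firms absorb exactly `total_savings`
    (demand curve: quantity = (demand_intercept - rate) / demand_slope)."""
    return demand_intercept - demand_slope * total_savings

baseline_savings = 100.0   # arbitrary units
informed_share = 0.005     # assumption: informed traders hold 0.5% of savings
broad_shift = 0.30         # assumption: 30% of savers stop saving

print(f"Baseline rate:             {clearing_rate(baseline_savings):.2%}")
print(f"Informed traders pull out: {clearing_rate(baseline_savings * (1 - informed_share)):.2%}")
print(f"Broad consumer shift:      {clearing_rate(baseline_savings * (1 - broad_shift)):.2%}")
```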
Okay, I have attempted to clarify my thinking on multiple occasions now. In contrast, my experience is that you seem reluctant to engage with my actual arguments, offer few new pieces of evidence, and describe my thinking in quite disparaging terms, which adds up to a poor basis for further discussion. I don’t think this is your intention, so please take this for what it is—an attempt at well-meaning feedback, and encouragement to revisit how you engage on this topic. Until I see this good-faith effort I will consider this argument closed for now.
It seems to me you don’t get the point. The point of the post is that the equilibrium you’re hypothesizing doesn’t really exist. Individuals can only amp up their own consumption by so much, so you need a ton of people partying like it’s the end of the world to move capital markets. And that’s what you’d be betting on—not whether the end is near, but whether everyone will believe it to the degree that they materially shift their saving behavior.
At least, if you only consider the capital supply-side argument in the original post, this is why it would fail. IIRC they don’t consider the capital demand side (i.e., what companies are willing to pay for capital). If a lot of companies are suddenly willing to pay more for capital—say, because they see a bunch of capital-intensive projects suddenly being in-the-money, either because new technology made new projects feasible, or because demand for their products is skyrocketing—then you could still see interest rates rise. I didn’t discuss this factor here, since it wasn’t the focus of the original post, but Carl Shulman has made it elsewhere—on the Lunar Society podcast, I think. Now if near-term TAI were to create those dynamics, then interest rates could indeed predict TAI, and the conclusion of the first post would happen to hold, though it would be for entirely different reasons than they state, and it would be contingent on the capital demand-side link actually holding.
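As a rough illustration of that demand-side channel (a minimal sketch with made-up project returns, not a model of anything real): if new technology lifts the returns of the marginal project firms want to fund, the rate they are willing to pay for capital rises even with unchanged savings supply.

```python
# Toy capital-demand sketch: firms borrow for any project whose expected
# return exceeds the interest rate. If new technology makes previously
# marginal projects much more productive, firms are willing to pay more
# for capital. Project returns below are made-up illustrative numbers.

def rate_firms_will_pay(project_returns, capital_available):
    """Highest rate at which firms still demand all available capital.

    project_returns: expected annual return of each (unit-sized) project.
    capital_available: number of unit-sized projects that can be funded.
    """
    ranked = sorted(project_returns, reverse=True)
    # The marginal funded project pins down the rate firms will accept.
    return ranked[min(capital_available, len(ranked)) - 1]

baseline = [0.09, 0.07, 0.05, 0.04, 0.03, 0.02]
with_tai_boom = [r + 0.10 for r in baseline]   # assume TAI lifts returns by 10pp

print("Rate today:        ", rate_firms_will_pay(baseline, 4))
print("Rate with TAI boom:", rate_firms_will_pay(with_tai_boom, 4))
```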
Thanks Harrison! Indeed, the “holding the bag” problem is what removes the incentive to “short the world”, compared to any other short positions you may wish to take in the market (which also have a timing problem—the market can stay irrational even if you’re right—but where there is at least a market mechanism creating incentives for the market to self-correct). The “holding the bag” problem removes this self-correction incentive, so the only way to beat the market is to consume more, and a few investors consuming more won’t unilaterally change the market price.
I have updated the post to reflect this
I have now updated the post to reflect this
See my response to Carl further up. This follows from accepting the assumptions of the former post. I wanted to show that even with said assumptions, their conclusions don’t follow. But I don’t think the assumptions are realistic either.
Yes, in isolation I see how that seems to clash with what Carl is saying. But that’s after I’ve granted the limited definition of TAI (x-risk or explosive, shared growth) from the former post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that’s a more accurate description of the real world). I see that I could’ve been clearer that this post was a case of “even if blindly accepting the (somewhat unrealistic) assumptions of another post, their conclusions don’t follow”, and not an attempt at describing reality as accurately as possible.
I agree that the marginal value of money won’t be literally zero after TAI (in the growth scenario; if we’re all dead, then it is exactly equal to zero). But (if we still assume those two TAI scenarios are the only possible ones), on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders—in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI. And importantly, if you disagree with this and believe the marginal utility of money won’t change radically, then that further undermines the point made in the original post, since their entire argument relies on the change in marginal utility—you can’t have it both ways! (Why would you posit that consumers change their savings rate when there are still benefits to being richer?)
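To make the incentive skew concrete, here is a minimal expected-value sketch, assuming (as in the original post’s two scenarios) that a post-TAI dollar is worth nothing in the doom case and very little in the shared-growth case; all probabilities and weights are illustrative assumptions, not estimates:

```python
# Toy expected-value sketch of a trade that only pays off after TAI.
# The payoff is weighted by the marginal value of money in each scenario.
# Probabilities and marginal-utility weights are illustrative assumptions.

def value_of_post_tai_payoff(p_doom, p_shared_growth, payoff=1.0,
                             mu_doom=0.0, mu_growth=0.05, mu_normal=1.0):
    """Expected value (in today's marginal-utility units) of $`payoff`
    received only after the TAI date.

    mu_doom:   marginal value of a post-TAI dollar if everyone is dead (zero).
    mu_growth: marginal value of a post-TAI dollar if everyone is rich (small).
    mu_normal: marginal value of a dollar if TAI did not arrive (baseline 1).
    """
    p_normal = 1.0 - p_doom - p_shared_growth
    return payoff * (p_doom * mu_doom
                     + p_shared_growth * mu_growth
                     + p_normal * mu_normal)

# Under the original post's two TAI scenarios, a dollar that arrives after
# TAI is worth little even if TAI is quite likely:
print(value_of_post_tai_payoff(p_doom=0.4, p_shared_growth=0.4))    # ~0.22
print(value_of_post_tai_payoff(p_doom=0.05, p_shared_growth=0.05))  # ~0.90
```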
Still, I see your point that even in such a world, there’s a difference between being a trillionaire and a quadrillionaire. If there are quadrillion-dollar profits to be made, then yes, you will get those chains of backwards induction up and working again. But I find that scenario very implausible, so in reality I don’t think this is an important consideration.
I don’t think this. Where do you think I say that?
These are the scenarios defined in the former post. I just run with the assumptions of the argument they present, and show that their conclusion doesn’t follow from those assumptions. That doesn’t mean I think all the assumptions are accurate reflections of reality. The fact that TAI can play out in many ways, and investors may have very differing beliefs about what it means for their optimal saving rate today, is just another argument for why we shouldn’t use interest rates as a measure of AI timelines, which is what I argue in this post.
Carl, I agree with everything you’re saying, so I’m a bit confused about why you think you disagree with this post.
This post is a response to the very specific case made in an earlier forum post, where they use a limited scenario to define transformative AI, and then argue that we should see interest rates rising if traders believe that scenario to be near.
I argue that we can’t use interest rates to judge whether said specific scenario is near or not. That doesn’t mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are trading at high multiples, and valuations of companies like NVIDIA, OpenAI, or DeepMind are growing, that’s evidence for the claim that “traders expect these technologies to become more powerful in the near-ish future”. Talking to investors provides further evidence in the same direction—I just left McKinsey, so up until recently I’ve had plenty of those conversations myself.
So this post should not be read as an argument about what the market believes, nor is it an argument for short or long timelines. It is only an argument that interest rates aren’t strong evidence either way.
No, the EMH does not imply that markets have long AGI timelines
I think I’ll try and type up my objections in a post rather than a comment—it seems to me that this post is so close to being right that it takes effort to pinpoint the exact place where I disagree, and so I want to take the time to formalize it a bit more.
But in short, I think it’s possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (Note: I choose the 5+ year horizon because I think once you get really close to AGI—say, less than a year out, with lots of weird stuff going on—you’d at least see some turbulence in the markets as folks get confused about how to trade in this very strange situation, so I do think the markets provide some evidence against extremely short timelines.)
I see that I wasn’t being super clear above. Others in the comments have pointed to what I was trying to say here:
- The window between when “enough” traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you’ll only increase your wealth for a very short time by making this bet
- It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities that they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation
- In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you’re poor for a much longer time)
Therefore, traders may choose not to short interest rates, even if they believe AI is imminent
(A short additional note here: yes, some of this is addressed at more length in the post, e.g., in section X re my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian “foom” scenario to happen overnight for the following point to be plausible: “timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won’t make sense to bet on interest rate movements for most people”.)
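To put rough numbers on the asymmetry in the third bullet above, here is a toy “wealth-years” sketch of the short-interest-rates bet; every number in it is a made-up assumption, chosen only to show the shape of the trade-off:

```python
# Toy payoff sketch: shorting interest rates on short AI timelines. The
# upside only lasts from the moment the market reprices until TAI arrives;
# the downside (rate risk, carrying cost) is borne for years if the
# repricing never comes. All numbers are made up for illustration.

def expected_gain(p_market_reprices_in_time,
                  upside_if_right=0.5,      # wealth gain, enjoyed for...
                  years_to_enjoy_upside=1,  # ...a short window before TAI
                  downside_if_wrong=0.3,    # losses / carrying cost per year
                  years_stuck_if_wrong=10): # ...borne for a long time
    """Crude 'wealth-years' comparison of the bet's two branches."""
    p = p_market_reprices_in_time
    upside = upside_if_right * years_to_enjoy_upside
    downside = downside_if_wrong * years_stuck_if_wrong
    return p * upside - (1 - p) * downside

for p in (0.2, 0.5, 0.8):
    print(f"P(market reprices in time) = {p:.0%}: "
          f"expected wealth-years = {expected_gain(p):+.2f}")
```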
While this is a very valuable post, I don’t think the core argument quite holds, for the following reasons:
Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in “The Big Short” about the Financial Crisis).
In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that’s not the same as making a billion bucks.
You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines—what you’re betting on then is when the world will realize that timelines are short, since that’s what it will take before many people choose to pull out of the market and thus drive interest rates up. It is entirely possible to believe both that timelines are short and that the world won’t realize AI is near for a while yet, in which case you wouldn’t make this bet. Furthermore, counterparty risks tend to get in the way of taking out very big loans, and so they would dominate your cost of capital.
All that said, it is possible that the strategy of “people with a high x-risk estimate should use long-term loans to fund their work” is indeed a feasible funding mechanism for such work, since this would not be a bet intending to make the borrower rich—it would just be a bet to survive, although you could get poor in the process.
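A minimal worked example of that last idea, with a hypothetical loan size, rate, horizon, and probabilities chosen purely for illustration (not a recommendation or a calibrated estimate):

```python
# Toy sketch of the "fund x-risk work with a long-term loan" idea: you
# borrow today, spend it on the work, and only repay in worlds where
# civilization continues to the loan's maturity. In doom worlds the loan
# is never repaid. All parameters are illustrative assumptions.

def expected_repayment(principal, annual_rate, years, p_no_doom_by_maturity):
    """Expected amount actually repaid, in nominal dollars."""
    owed_at_maturity = principal * (1 + annual_rate) ** years
    # Doom worlds: nothing is repaid (no one is around to collect).
    # Surviving worlds: the full compounded amount is owed.
    return p_no_doom_by_maturity * owed_at_maturity

principal, rate, years = 100_000, 0.05, 15
for p_survive in (0.5, 0.8, 0.95):
    cost = expected_repayment(principal, rate, years, p_survive)
    print(f"P(survive to maturity) = {p_survive:.0%}: "
          f"expected repayment ≈ ${cost:,.0f} on ${principal:,} borrowed")
```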
Thank you!
Agree with many of the considerations above—the bar should probably rise somewhat after such a funding shortfall. One way to solve it in practice could be to sit down in the room with the old FTX FF team and ask “which XX% of your grants are you most enthusiastic about and why”, and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund that. The generalized point I’m trying to make is twofold: 1) that quite a bit of judgement already went into assessing these projects and it should be possible to use that to decide how many of them are above the bar, and 2) because all the other input factors (talent, project idea, vetting) are unchanged, and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.
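To illustrate point 2, here is a minimal sketch assuming a generic Cobb-Douglas-style production function with made-up parameters (the functional form is my assumption for illustration, not something from the post):

```python
# Toy illustration: with diminishing returns to funding and the
# non-funding inputs (talent, ideas, vetting) held fixed, a drop in
# available funding raises the marginal return to the next dollar.
# Functional form and numbers are assumptions for illustration only.

def impact(funding, other_inputs=1.0, alpha=0.4):
    """Cobb-Douglas-style impact: other_inputs^(1-alpha) * funding^alpha."""
    return (other_inputs ** (1 - alpha)) * (funding ** alpha)

def marginal_return(funding, eps=1e-6):
    """Numerical derivative of impact with respect to funding."""
    return (impact(funding + eps) - impact(funding)) / eps

print("Marginal return at full funding (1.0):", round(marginal_return(1.0), 3))
print("Marginal return after shortfall (0.5):", round(marginal_return(0.5), 3))
```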
And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I’m not exactly sure what the requirements would be for the funding bar to have a martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact distribution across projects or time?), but it seems reasonable to start with something close to that assumption, at least. However, that still implies that when you experience a large, unexpected funding shortfall, the bar does need to rise somewhat.
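And a toy sketch of that last implication, assuming grants are simply ranked by cost-effectiveness and funded down the list until the budget runs out (all numbers invented for illustration):

```python
# Toy sketch: rank projects by cost-effectiveness, fund down the list
# until the budget runs out, and read off the "bar" as the
# cost-effectiveness of the last project funded. An unexpected budget
# shortfall then mechanically raises the bar. Numbers are made up.

def funding_bar(projects, budget):
    """projects: list of (cost, cost_effectiveness) tuples."""
    bar = None
    spent = 0.0
    for cost, effectiveness in sorted(projects, key=lambda p: -p[1]):
        if spent + cost > budget:
            break
        spent += cost
        bar = effectiveness   # marginal funded project defines the bar
    return bar

portfolio = [(10, 9.0), (10, 7.0), (10, 5.0), (10, 4.0), (10, 3.0), (10, 2.0)]
print("Bar with planned budget (50):       ", funding_bar(portfolio, 50))
print("Bar after unexpected shortfall (30):", funding_bar(portfolio, 30))
```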
Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.
I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I’m sure you’ve already considered, but I’m stating them so others can also weigh in):
IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, and 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you’re unusually constrained by grantmaker capacity for a while.
Temporarily ramping up funding can also be justified by considering the likely flow-through effects of acting as an “insurer of last resort” for affected projects. Abrupt funding cutoffs are very costly for project founders in terms of added stress, reduced capacity to focus on doing good, and possibly long-term career prospects. If the EA community doesn’t step in to try and help the affected projects, we may expect some core team members to disengage from EA, or to shift towards less ambitious projects in the future. Furthermore, the next generation of potential founders will be watching. The more they see a community that’s willing to shoulder the cost in a downturn, the more we can expect new founders to engage with EA and take on ambitious projects.
Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I’m sure this must also be hard for you on a personal level, so I hope you’re able to find consolation in all the good that will be created by the projects you helped get off the ground, and that you still find a home in the EA community.
Thank you Joel! I appreciate it