Research manager at ICFG.eu, board member at Langsikt.no, doing policy research to mitigate risks from biotechnology and AI. Ex-SecureBio manager, ex-McKinsey Global Institute fellow and founder of the McKinsey Effective Altruism community. Follow me on Twitter at @jgraabak
Jakob
While this is a very valuable post, I don’t think the core argument quite holds, for the following reasons:
Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in “The Big Short” about the Financial Crisis).
In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that’s not the same as making a billion bucks.
You can argue that one could take a short position on interest rates (e.g., in the form of a fixed-rate loan) if you believe they will rise at some point, but that is a different bet from short timelines: what you're betting on then is when the world will realize that timelines are short, since that's what it takes for many people to pull out of the market and drive interest rates up. It is entirely possible to believe both that timelines are short and that the world won't realize AI is near for a while yet, in which case you wouldn't make this bet. Furthermore, counterparty risks tend to get in the way of taking out very large loans, so they would dominate your cost of capital.
All that said, it is possible that the strategy of “people with a high x-risk estimate should use long-term loans to fund their work” is indeed a feasible funding mechanism for such work, since this would not be a bet intending to make the borrower rich—it would just be a bet to survive, although you could get poor in the process.
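To make the loan mechanics concrete, here is a minimal numeric sketch (all figures made up, and of course not investment advice). The point it illustrates: the present value of a fixed-rate liability only falls once market rates actually move, i.e., once other traders update, regardless of whether your private timeline belief turns out to be correct.

```python
# A minimal numeric sketch (hypothetical figures): why a fixed-rate loan
# is a bet on when the MARKET updates, not on whether timelines are short.

def loan_liability_pv(principal: float, fixed_rate: float,
                      market_rate: float, years: int) -> float:
    """Present value of repaying a zero-coupon loan taken at `fixed_rate`,
    discounted at the prevailing market rate."""
    repayment = principal * (1 + fixed_rate) ** years
    return repayment / (1 + market_rate) ** years

principal, fixed_rate, years = 1_000_000, 0.04, 30

# Case 1: timelines are short, but the market never updates before the
# borrower needs the gain -- rates stay at 4%, so there is no profit.
pv_no_update = loan_liability_pv(principal, fixed_rate, 0.04, years)

# Case 2: the market updates and long rates jump to 8% -- the liability
# shrinks in present-value terms, and only now does the bet pay off.
pv_update = loan_liability_pv(principal, fixed_rate, 0.08, years)

print(f"Liability if rates stay at 4%: {pv_no_update:,.0f}")  # ~1,000,000
print(f"Liability if rates jump to 8%: {pv_update:,.0f}")     # ~322,000
print(f"Mark-to-market gain:           {principal - pv_update:,.0f}")
```

So holding the loan is profitable (in mark-to-market terms) exactly when the world realizes, which is the distinction drawn above.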
Posting as an individual who is a consultant, not on behalf of my employer
Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).
First, let me voice a +1 to everything Jeremy has said here already, with the possible exception that I know several McKinsey partners are interfacing with the EA movement on particular causes like animal welfare, governance of AI, pandemic preparedness, and climate change. However, I don't know the exact scope of our client work in any of these fields, and I haven't heard of projects for EA orgs (I've worked on several of these topics for the McKinsey Global Institute, see e.g. this report: https://www.mckinsey.com/business-functions/sustainability/our-insights/climate-risk-and-response-physical-hazards-and-socioeconomic-impacts?cid=app)
Second, I’m happy to jump on a 30-60 minute call in July/August to discuss if the EACN or some of its members can be helpful in making something like this happen—you can reach me at jakob_graabak[at]mckinsey[dot]com. (Luke, Ozzie, any of the Peters, any others?)
One example of how we could help: for “Talent Loans” I can imagine that we could use the McKinsey EA Community to find the right people in a more efficient way than described above. I of course understand that most EA orgs likely won’t become regular McKinsey clients, but I can try to talk to some of our partners about how we could run 2-3 pilot projects with e.g. Open Phil in a mutually beneficial way. Perhaps that would also work as a proof of demand and would drive more people into this space.
I see that I wasn’t being super clear above. Others in the comments have pointed to what I was trying to say here:
- The window between when “enough” traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you’ll only increase your wealth for a very short time by making this bet
- It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities that they believe are stronger than shorting interest rates (e.g., they may decide to invest in tech companies), or they may decide to take some vacation
- In order to get the benefits of the best case above, you need to take on massive interest rate risk, so the downside is potentially much larger than the upside (plus, in the downside case, you’re poor for a much longer time)
Therefore, traders may choose not to short interest rates, even if they believe AI is imminent
Hi Ryan, thanks for your comment!
1) “The title should clarify that it’s “national scale” rather than scale generally that’s overrated.”
We did not use "national scale" because we cover policymaking at the national, subnational, and multinational levels. However, we agree that "scale" is very useful as a parameter in cause prioritization frameworks. You're right that our claim is narrower: only that scale is overrated in this specific setting.
2) “US and China are probably more likely to copy their own respective states & provinces than copy the Nordics, right?”
This is a valid point. For this reason, our logic can also be used to argue that EA should increase policy efforts in US states or other subnational policy entities. However, some policy domains are mostly relevant at the national level (e.g. foreign policy), and there are cases where foreign examples work better as motivators (see e.g. this commercial, which uses US patriotism to advocate for accelerated EV uptake in the US).
3) “Being unusually homogenous, stable, and trusting might mean that some policies work in the Nordics, even if they don’t work elsewhere.”
You're right that some policies that work in the Nordics won't work elsewhere! This is analogous to how some (small-scale) startups will raise a Series A round but not succeed at larger scale. Startups typically start with little funding and unlock increasing amounts of money. This way, if the startup fails, it fails in the cheapest possible way. Similarly, by testing new policies first in the most favorable governance environments and gradually scaling them to trickier environments with larger costs of failure, the policies that fail will do so in the least costly way.
4) “If we’re worried about whether govt pursues certain tech (like AI) safely over the coming 1-2 decades, then we should favour involvement in the executive over legislating, and the former can’t really transfer from the Nordics to the US. Diffusion may be rather slow.”
You’re right that if your main concern is linked to specific, urgent causes, you may prefer more direct routes to impact in the countries that matter most
Thank you for writing this up; I've wanted to do the same for a while! I think the only thing I see missing is that prizes can raise the salience of some concept or nuance, and therefore serve as a coordination mechanism in more ways than you list (e.g., say that we want more assessments of long-term interventions using the significance, persistence, contingency framework from WWOTF; then a prize for those assessments would also help signal-boost the framework)
Carl, I agree with everything you’re saying, so I’m a bit confused about why you think you disagree with this post.
This post is a response to the very specific case made in an earlier forum post, where the authors use a limited scenario to define transformative AI, and then argue that we should see interest rates rising if traders believe that scenario to be near.
I argue that we can't use interest rates to judge whether that specific scenario is near. That doesn't mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are trading at high multiples, and valuations of companies like NVIDIA/OpenAI/DeepMind are growing, that's evidence for the claim that "traders expect these technologies to become more powerful in the near-ish future". Talking to investors provides further evidence in the same direction; I just left McKinsey, so up until recently I had plenty of those conversations myself.
So this post should not be read as an argument about what the market believes, nor is it an argument for short or long timelines. It is only an argument that interest rates aren’t strong evidence either way.
Agree with many of the considerations above; the bar should probably rise somewhat after such a funding shortfall. One way to solve it in practice could be to sit down with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about, and why", and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund those. The generalized point I'm trying to make is twofold: 1) quite a bit of judgement already went into assessing these projects, and it should be possible to use that to decide how many of them are above the bar; and 2) because all the other input factors (talent, project idea, vetting) are unchanged, and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.
And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I’m not exactly sure what the requirements would be for the funding bar to have a Martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact distribution across projects or time), but it seems reasonable to start with something close to that assumption, at least. However that still implies that when you experience a large, unexpected funding shortfall, the bar does need to rise somewhat.
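To make the Martingale idea concrete, here is a minimal sketch in my own notation (not taken from the post): let $b_t$ be the funding bar at time $t$ and $\mathcal{F}_t$ the funder's information at that time. The bar being a martingale would mean

```latex
% Martingale funding bar: no predictable drift.
% Any foreseeable change should already be reflected in today's bar;
% only genuine surprises (like an unexpected shortfall) move it.
\[
  \mathbb{E}\left[\, b_{t+1} \mid \mathcal{F}_t \,\right] \;=\; b_t
\]
```

i.e., any predictable fall in the bar is already priced into today's bar, while a genuinely unexpected shortfall can still push $b_{t+1}$ up without violating the property.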
Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I'm sure this must also be hard for you on a personal level, so I hope you're able to find consolation in all the good that will be created from the projects you helped get off the ground, and that you still find a home in the EA community.
The flip side of this is that people with less existing “reputation stock” may see the potential status upside as the main compensation from a prize contest, and not the monetary benefit
You can see their rationale in their public model: https://docs.google.com/spreadsheets/d/1tytvmV_32H8XGGRJlUzRDTKTHrdevPIYmb_uc6aLeas/edit#gid=1362437801
It’s the sum of 1.7% “improving circumstances over time”, 0.9% “compounding non-monetary benefits” and 1.4% “temporal uncertainty”. They have 0.0% “pure time preference”
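Spelled out as a formula (my own presentation of the spreadsheet's numbers, which sum to 4.0%):

```latex
% Philanthropic discount rate as the sum of the model's components
\[
  r \;=\; \underbrace{1.7\%}_{\substack{\text{improving} \\ \text{circumstances}}}
      \;+\; \underbrace{0.9\%}_{\substack{\text{non-monetary} \\ \text{benefits}}}
      \;+\; \underbrace{1.4\%}_{\substack{\text{temporal} \\ \text{uncertainty}}}
      \;+\; \underbrace{0.0\%}_{\substack{\text{pure time} \\ \text{preference}}}
  \;=\; 4.0\%
\]
```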
Have you spoken to Jona Glade about it? He’s also working on setting up a consultancy. I’m also happy to chat about this.
+1 to all Jona writes here—with the caveat that consulting firms like McKinsey or BCG can also help you scope the project and prioritize what’s most important to work on. This of course requires some level of trust (like in all professional services where the client may not know their exact needs), which strengthens the case for using EA consultants at least for pilot projects until norms around using consultants are well-established.
I think the "get lots of input in a short time from a crowd with different semi-informed opinions" feature of prizes is hard to replace through other mechanisms. Some companies have built up extensive expert networks that they can call on demand to do this, but even that doesn't have quite the same agility. However, in those cases you may often want to compensate more than just the best entry (in line with the OP)
(a short additional note here: yes, some of this is addressed at more length in the post, e.g., in section X re my point 3, but IMO the authors state their case somewhat too strongly in those sections. You do not need a Yudkowskian "foom" scenario to happen overnight for the following point to be plausible: "timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years out, and in the meantime it won't make sense for most people to bet on interest rate movements")
One interesting debate would be: what's the optimal % of funding that should go to prizes? Which parameters would allow us to determine this? One can imagine that the % should be higher in communities that are struggling to hire enough people, or where research agendas are unclear so more coordination is needed, but lower in communities where people have low savings, or where the funders have the capacity to diversify risks.
One additional consideration is that the coordination benefits from prizes (in raising the salience of memes or the status of the winners) come at an attention cost, so a large number of prizes may cannibalize our "common knowledge budget" (if there is a limit to how much common knowledge we can generate)
Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.
I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):
IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you’re unusually constrained by grantmaker capacity for a while
Temporarily ramping up funding can also be justified by considering the likely flow-through effects of acting as an "insurer of last resort" for affected projects. Abrupt funding cutoffs are very costly for project founders in terms of added stress, reduced capacity to focus on doing good, and possibly their long-term career prospects. If the EA community doesn't step in to try and help the affected projects, we may expect some core team members to disengage from EA, or to shift towards less ambitious projects in the future. Furthermore, the next generation of potential founders will be watching. The more they see a community that's willing to shoulder the cost in a downturn, the more we can expect new founders to engage with EA and take on ambitious projects.