Or you might like to look into Christian’s grantmaking at Founders Pledge: https://80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/
Benjamin_Todd
Thanks, that’s helpful background!
I agree tractability of the space is the main counterargument, and MacArthur might have had good reasons to leave. Like I say in the post, I’d suggest thinking about this issue carefully if you’re interested in giving to this area.
I don’t focus exclusively on philanthropic funding. I added these paragraphs to the post to clarify my position:
I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that ‘preventing nuclear war’ more broadly receives significant attention from defence departments. However, even considering those resources, it still seems about as neglected as biorisk.
And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don’t want to originate within a certain national government).
I’d add that if there’s almost no EA-inspired funding in a space, there are likely to be some promising gaps for someone applying that mindset.
In general, it’s a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it’s also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).
--
Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust. Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.
E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine that ending factory farming is most cost-effective. But once you’ve spent 5 years building career capital in factory farming, the available interventions, or your calculations about them, will likely be very different.
It might take more than $1bn, but at around that level you could become a major funder of one of these causes (like AI safety), so you’d already be getting significant benefits within a cause.
Agree you’d need to average 2x for the last point to work.
Though note the three pathways to impact—talent, intellectual diversity, OP gaps—are mostly independent, so you’d only need one of them to work.
Also agree that in practice there would be some funging between the two, which would limit the differences; that’s a good point.
I’d also be interested in that. Maybe worth adding that the other grantmaker, Matthew, is younger. He graduated in 2015 so is probably under 32.
Intellectual diversity seems very important to figuring out the best grants in the long term.
If the community currently has, say, $20bn to allocate, you only need a 10% improvement to future decisions for it to be worth +$2bn.
Funder diversity also seems very important for community health, and therefore our ability to attract & retain talent. It’s not attractive to have your org & career depend on such a small group of decision-makers.
I might quantify the value of the talent pool at around another $10bn, so again, you only need a ~10% increase here to be worth a billion, and over-centralisation seems like one of the bigger problems.
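To make those rough numbers concrete, here’s a minimal back-of-envelope sketch (the ~$20bn and ~$10bn figures are the illustrative assumptions above, not careful estimates):

```python
# Rough back-of-envelope for the figures above (both dollar amounts are the
# illustrative assumptions from this comment, not precise estimates).
community_capital = 20e9   # ~$20bn the community has to allocate
talent_pool_value = 10e9   # rough value placed on the talent pool

value_of_better_decisions = 0.10 * community_capital  # 10% improvement to future decisions
value_of_more_talent      = 0.10 * talent_pool_value  # ~10% increase in talent

print(f"~${value_of_better_decisions / 1e9:.0f}bn")  # ~$2bn
print(f"~${value_of_more_talent / 1e9:.0f}bn")       # ~$1bn
```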
The current situation also creates a single point of failure for the whole community.
Finally, it still seems like OP has various kinds of institutional bottlenecks that mean they can’t obviously fund everything that would be ‘worth’ funding in the abstract (and even more so to do all the active grantmaking that would be worth doing). They also have PR constraints that might make some grants difficult. And it seems unrealistic to expect any single team (however good they are) not to have some blind spots.
$1bn is only 5% of the capital that OP has, so you’d only need to find 1 grant for every 20 that OP makes that they’ve missed, with only 2x the effectiveness of marginal OP grants, in order to get 2x the value.
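As a minimal sketch of that arithmetic (assuming grants of comparable size, and the ~$20bn OP figure implied by the 5% above):

```python
# Illustrative sketch of the "1 in 20" arithmetic, using the figures implied above.
op_capital    = 20e9   # OP's capital (implied by "$1bn is only 5%")
donor_capital = 1e9    # a new $1bn donor

share_of_op_volume = donor_capital / op_capital   # 0.05

# Assuming grants of comparable size, the donor only needs to find about
# one opportunity OP missed for every 20 grants OP makes:
missed_per_20_op_grants = share_of_op_volume * 20  # = 1

# If those missed grants average ~2x the cost-effectiveness of OP's marginal
# grants, each donor dollar is worth ~2x an OP marginal dollar.
print(missed_per_20_op_grants)  # 1.0
```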
One background piece of context is that I think grants often vary by more than 10x in cost-effectiveness.
Nuclear security seems like an interesting funding gap
One quick point is divesting, while it would help a bit, wouldn’t obviously solve the problems I raise – AI safety advocates could still look like alarmists if there’s a crash, and other investments (especially including crypto) will likely fall at the same time, so the effect on the funding landscape could be similar.
With divestment more broadly, it seems like a difficult question.
I share the concerns about it being biasing and making AI safety advocates less credible, and feel pretty worried about this.
On the other side, if something like TAI starts to happen, then the index will go from ~5% AI companies to 50%+ AI companies. That’ll mean AI stocks will outperform the index by ~10x or more, while non-AI stocks will underperform by 2x or more.
So by holding the index, you’d be forgoing 90%+ of future returns (in the highest-leverage scenarios), and by being fully divested, you’d be giving up 95%+.
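Here’s a minimal sketch of that arithmetic, assuming only the ~5% to ~50% shift in index weights described above:

```python
# Minimal sketch of the forgone-returns arithmetic. The only inputs are the
# index weights from the paragraph above (~5% AI now, ~50%+ after a TAI boom);
# the result doesn't depend on the overall index return.
ai_weight_start, ai_weight_end = 0.05, 0.50
index_return = 1.0  # total return factor of the index (arbitrary)

ai_return     = (ai_weight_end / ai_weight_start) * index_return              # ~10x the index
non_ai_return = ((1 - ai_weight_end) / (1 - ai_weight_start)) * index_return  # ~0.5x the index

print(f"Holding the index forgoes {1 - index_return / ai_return:.0%} of AI returns")  # ~90%
print(f"Fully divested forgoes {1 - non_ai_return / ai_return:.0%} of AI returns")    # ~95%
```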
So the costs are really big (far far greater than divesting from oil companies).
Moreover, unless your p(doom) is very high, it’s plausible a lot of the value comes from what you could do in post-TAI worlds. AI alignment isn’t the only cause to consider.
On balance, it doesn’t seem like the negatives are so large as to reduce the value of your funds by 10x in TAI worlds. But I feel uneasy about it.
I want to be clear it’s not obvious to me OP is making a mistake. I’d lean towards guessing AI safety and GCBRs are still more pressing than nuclear security. OP also have capacity constraints (which make it e.g. less attractive to pursue smaller grants in areas they’re not already covering, since it uses up time that could have been used to make even larger grants elsewhere). Seems like a good fit for some medium-sized donors who want to specialise in this area.
Interesting. I guess a key question is whether another wave of capabilities (e.g. gpt-5, agent models) comes in soon or not.
Agree it’s most likely already in the price.
Though I’d stand behind the idea that markets are least efficient when it comes to big booms and busts involving large asset classes (in contrast to relative pricing within a liquid asset class), which makes me less inclined to simply accept market prices in these cases.
You could look for investments that do neutral-to-well in a TAI world, but have low-to-negative correlation to AI stocks in the short term. That could reduce overall portfolio risk but without worsening returns if AI does well.
This seems quite hard, but the best ideas I’ve seen so far are:
The cluster of resource companies, electricity producers, commodities, and land. There’s reason to think these could do quite well during a TAI transition, but in the short term they do well when inflation rises, which tends to be bad for AI stocks. (And they were effective hedges in the most recent drawdown and in 2022.) Some of them also look quite cheap at the moment. However, in a recession, they will fall at the same time as AI stocks.
Shorting long-dated government bonds or AI company credit. In the short term this helps to hedge out the interest rate and inflation exposure in AI companies, and it should also do well in the long term if an AI boom increases interest rates. Credit spreads are narrow, so you’re not paying much for the hedge. However, if there’s a recession, these will also do badly.
Index shorts (especially focused on old economy stocks). This could reduce overall market risk, since AI stocks will most likely fall at the same time as other stocks. If you buy long-dated put options, there’s some reason to think AI will increase volatility, so you might also benefit a little there. However, on net it might be desirable to have high market exposure / this trade most likely loses money.
Long-short multi-asset trend-following. This is an active strategy (so you might be skeptical that it works) but tends to do well during macro regime changes / big market crashes / high volatility, which will likely be times when AI stocks are doing badly. But for the same reasons it could also do well during an AI boom.
However, all of these have important downsides and someone would need to put billions of dollars behind them to have much impact on the overall portfolio.
(Also this is not investment advice and these ideas are likely to lose a lot of money in many scenarios.)
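To illustrate the diversification logic behind these ideas, here’s a toy two-asset calculation; the weights, volatilities, and correlation are made-up placeholders rather than estimates for any particular asset:

```python
# Toy two-asset illustration of the diversification logic above: a hedge with
# low or negative correlation to AI stocks can cut portfolio volatility even if
# it doesn't raise expected returns. All numbers are made-up placeholders.
from math import sqrt

w_ai, w_hedge = 0.8, 0.2           # portfolio weights
vol_ai, vol_hedge = 0.35, 0.20     # assumed annual volatilities
correlation = -0.3                 # assumed AI-vs-hedge correlation

portfolio_vol = sqrt(
    (w_ai * vol_ai) ** 2
    + (w_hedge * vol_hedge) ** 2
    + 2 * w_ai * w_hedge * correlation * vol_ai * vol_hedge
)

print(f"AI-only volatility:     {vol_ai:.0%}")         # 35%
print(f"80/20 hedged portfolio: {portfolio_vol:.0%}")  # ~27%
```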
I should have maybe added that several people mentioned “people who can practically get stuff done” is still a big bottleneck.
My impression is that of EA resources focused on catastrophic risk, 60%+ are now focused on AI safety, or issues downstream of AI (e.g. even the biorisk people are pretty focused on the AI/Bio intersection).
AI has also seen dramatic changes to the landscape / situation in the last ~2 years, and my update was focused on how things have changed recently.
So for both reasons, most of the updates that seemed salient to me concerned AI in some way. That said, I’m especially interested in AI myself, so I focused more on questions there. It would be ideal to hear from more bio people.
I also briefly mention nuclear security, where I think the main update is the point about lack of funding.
AI stocks could crash. And that could have implications for AI safety
Hi Wayne,
Those are good comments!
On the timing of the profits, my first estimate is for how far profits will need to eventually rise.
To estimate the year-by-year figures, I just assume revenues grow at the 5yr average rate of ~35% and check that’s roughly in line with analyst expectations. That’s a further extrapolation, but I found it helpful to get a sense of a specific plausible scenario.
(I also think that if Nvidia revenue growth looked to be under 20% p.a. over the next few quarters, the stock would sell off, though that’s just a judgement call.)
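As a sketch of that extrapolation (only the ~35% growth rate comes from above; the starting revenue is a placeholder):

```python
# Sketch of the extrapolation described above: grow revenue at the ~35% p.a.
# five-year average rate. The starting revenue is a placeholder, not a figure
# from the post; only the growth rate comes from the comment above.
revenue = 100e9      # hypothetical starting annual revenue
growth_rate = 0.35   # ~35% p.a.

for year in range(1, 6):
    revenue *= 1 + growth_rate
    print(f"Year {year}: ~${revenue / 1e9:.0f}bn")
```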
On the discount rate, my initial estimate is for the increase in earnings for Nvidia relative to other companies (which allows us to roughly factor out the average market discount rate) and assuming that Nvidia is roughly as risky as other companies.
In the appendix I discuss how if Nvidia is riskier than other companies it could change the estimate. Using Nvidia’s beta as an estimate of the riskiness doesn’t seem to result in a big change to the bottom line.
I agree analyst expectations are a worse guide than market prices, which is why I tried to focus on market prices wherever possible.
The GPU lifespan figures come in when going from GPU spending to software revenues. (They’re not used for Nvidia’s valuation.)
If $100bn is spent on GPUs this year, then you can amortise that cost over the GPU’s lifespan.
A 4 year lifespan would mean data centre companies need to earn at least $25bn of revenues per year for the next 4 years to cover those capital costs. (And then more to pay for the other hardware and electricity they need, as well as profit.)
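A minimal sketch of that amortisation step:

```python
# Minimal sketch of the amortisation step: spread this year's GPU spend over the
# assumed hardware lifespan to get the revenue needed just to cover that cost.
gpu_spend = 100e9      # $100bn of GPU spending this year (figure from above)
lifespan_years = 4     # assumed GPU lifespan

revenue_needed_per_year = gpu_spend / lifespan_years
print(f"~${revenue_needed_per_year / 1e9:.0f}bn per year for {lifespan_years} years")
# (More is needed on top of this for other hardware, electricity, and profit.)
```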
On consumer value, I was unsure whether to just focus on revenues or make this extra leap. The reason I was interested in it is I wanted to get a more intuitive sense of the scale of the economic value AI software would need to create, in terms that are closer to GDP, or % of work tasks automated, or consumer surplus.
Consumer value isn’t a standard term, but if you subtract the cost of the AI software from it, you get consumer surplus (max WTP minus price). Arguably the consumer surplus increase will be equal to the GDP increase. However, I got different advice on how to calculate the GDP increase, so I left it at consumer value.
I agree looking at more historical case studies of new technologies being introduced would be interesting. Thanks for the links!
I mean “enter the top of the funnel”.
For example, if you advertise an event as being about it, more people will show up to the event. Or more people might sign up to a newsletter.
(We don’t yet know how this translates into more intense forms of engagement.)
It’s fair that I only added “(but not more)” to the forum version – it’s not in the original article, which was framed more like a lower bound. Though I stand by “not more” in the sense that the market isn’t expecting it to be *way* more, as you’d get in an intelligence explosion or automation of most of the economy. Anyway, I edited it a bit.
I’m not taking revenue to be equivalent to value. I define value as max consumer willingness to pay, which is closely related to consumer surplus.
I agree risk also comes into it – it’s not a risk-neutral expected value (I discuss that in the final section of the OP).
Interesting suggestion that the Big 5 are riskier than Nvidia. I think that’s not how the market sees it – the Big 5 have lower price & earnings volatility and lower beta. Historically, chips have been very cyclical. The market also seems to think there’s a significant chance Nvidia loses market share to TPUs or AMD. I think the main reason Nvidia has a higher PE ratio is its earnings growth.
I agree people often overlook that (and also future resources).
I think bio and climate change also have large cumulative resources.
But I see this as a significant reason in favour of AI safety, which has become less neglected on an annual basis recently, but is a very new field compared to the others.
Also a reason in favour of the post-TAI causes like digital sentience.