In summary, if we expect AI companies to beat the market, then we may or may not prefer to invest in AI stocks on the margin, and it could easily go either way depending on what assumptions we make.
Thanks for this post.
As you noted, I think this is an important caveat/assumption, and should probably be highlighted more in the intro.
As I read/skimmed this, I realized I was confused because much of the talk about AI mission hedging (mostly by other people) is itself confused and doesn’t clearly differentiate between two different claims:
We should mission hedge AI progress, because money matters more in worlds where AI makes more progress (what your post tries to answer)
We should invest in AI, because (some) EAs believe that we know a “secret” about the world, namely that other people are underestimating AI progress.
Your post fairly clearly answers the first claim (in the negative) but does not shed much light on the second.
Yeah I’m explicitly not addressing #2 because it would require an entirely different approach. I can edit the intro to clarify.
Great, thanks for the edits! :)
Agreed that it’d take a different approach and you can’t be expected to do everything!
One thing I struggle with is writing summaries. My natural inclination is to want to include every possible caveat and clarification, but then the summary turns into the whole essay. My general approach is to write the summary for someone with a lot of context who fully trusts me, and then in the rest of the post, assume low context / low trust.
If there are certain issues that people are particularly likely to get hung up on, it can make sense to include them in the summary. The fact that you had this confusion suggests that it’s common, so I figured it’s worth adding.
FWIW I’ve spent a bit of time thinking about #2 without much progress.
A few disjointed thoughts:
Things in the same reference class as AI (i.e., growth stocks) have systematically underperformed the market, but the top-performing stocks are almost always growth stocks.
How much should we act on idiosyncratic beliefs? Some EAs (eg SBF) clearly have market-beating skill, although Alameda-style short-term trading using math has much quicker feedback loops than trying to predict long-term market trends, so it’s much easier to know whether you’re good at it. I’m not sure anyone in history has been demonstrably good at predicting long-term trends. (Maybe George Soros? Not too familiar with his work.)
EAs are thinking much more than most people about the impact of transformative AI. On the other hand, EA AI timeline forecasts are pretty similar to expert forecasts, and some market participants are extremely bullish on AI/technological growth (eg Cathie Wood).
Investing in AI only makes sense given certain assumptions: there must be a slow takeoff, you must invest in the company(ies) that succeed, and the benefits must accrue disproportionately to those companies (rather than to the whole economy, which you might see if eg OpenAI is the market leader).
Under the right circumstances, investing in AI might generate some insane amount of utility like 10^20 times bigger than the current value of earth. Not sure how to think about that. Does the EV calculation say to invest in AI for that small chance of a 10^20 gain?
I’d like to have an idea of how to value AI companies. How much should we be willing to pay? Even if we would pay more for AI companies than for the broad market, AI companies are already expensive vs the market. Are they too expensive, or still cheap enough to be worth buying?
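On the “small chance of a 10^20 gain” question above, a naive expected-value calculation is easy to write down. This is a purely illustrative sketch: the jackpot probability and utility numbers are made up, and the real question (whether to follow naive EV at all in Pascal’s-mugging territory) is left open.

```python
# Toy EV calculation for the "small chance of a 10^20 gain" bullet.
# All numbers here are illustrative assumptions, not estimates from the post.

p_jackpot = 1e-12         # assumed chance the investment captures the windfall
jackpot_utility = 1e20    # assumed utility multiple in the jackpot world
baseline_utility = 1.0    # utility of the status-quo portfolio

ev_invest = p_jackpot * jackpot_utility + (1 - p_jackpot) * baseline_utility
ev_passive = baseline_utility

# The tail term dominates: ~1e8 vs 1.0, so naive EV says "invest".
print(ev_invest > ev_passive)
```

Of course, as the reply below notes, most longtermist interventions have similarly astronomical naive EVs, so this calculation alone doesn’t single out AI investing.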
I had an idea a few months ago for how to allocate to AI stocks. It seemed promising but it’s incomplete. I will post my personal notes below in case anyone’s interested:
Let our investment thesis be:
There will be a slow takeoff
Takeoff will start soon enough before singularity that we have time to spend most of our money on things we care about
Takeoff will be driven by publicly-traded companies
Under thesis, suppose optimal strategy is to invest in each company in proportion to how much of AI progress it captures
This makes intuitive sense. Not sure how to justify
Sort of like risk parity
Sort of like fundamental weighting, where companies get weight in proportion to their value
If thesis is wrong, invest in factor portfolio
Say 35% chance that thesis is true: 70% chance of slow takeoff * 50% chance that takeoff happens slowly enough for us to spend most of our money
I actually think there’s less than a 70% chance of slow takeoff but whatever
Could better estimate this probability by drawing some overlapping curves of takeoff speeds + achievable spending rates
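The “overlapping curves” idea above can be roughed out as a Monte Carlo: draw a takeoff duration and a required spending duration, and count how often the takeoff window is long enough. Every distribution and parameter below is an assumed placeholder for illustration.

```python
import random, math

# Rough Monte Carlo version of "overlapping curves of takeoff speeds +
# achievable spending rates". All distributions/parameters are assumptions.

random.seed(0)
p_slow_takeoff = 0.7  # from the notes above

def sample_takeoff_years():
    # assumed lognormal over how long the takeoff window lasts (median ~4 yrs)
    return math.exp(random.gauss(math.log(4), 0.8))

def sample_years_needed_to_spend():
    # assumed lognormal over how long we'd need to deploy most of our money
    return math.exp(random.gauss(math.log(3), 0.5))

n = 100_000
hits = sum(sample_takeoff_years() > sample_years_needed_to_spend()
           for _ in range(n))
p_spend_in_time = hits / n

p_thesis = p_slow_takeoff * p_spend_in_time
print(round(p_thesis, 2))
```

With these made-up parameters the product lands in the same ballpark as the 35% point estimate; the point is that the estimate becomes sensitive, inspectable assumptions rather than a single gut number.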
Stock proportions should be a weighted average of the optimal proportion in true-thesis vs. false-thesis worlds
growth-y AI stocks will probably end up underweighted unless they look particularly likely to get big gains in true-thesis world
value-y AI stocks will end up double-overweighted (not many of these. FB is one example)
A fundamental index underweights mega-cap tech companies by 0.5pp to 3pp. The thesis maybe overweights mega-cap tech companies by 5pp to 8pp. At 1/3 chance thesis and 2/3 chance not-thesis, net overweight is roughly +1pp
But depends on individual company, eg NVDA would probably get net overweight, while AMZN gets net underweight
INTC is overweighted by both value and thesis
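The weighted-average rule in these notes can be sketched directly. The tickers match the examples above, but all weights are hypothetical placeholders, not real index or market-cap data:

```python
# Hedged sketch of "stock proportions should be a weighted average of the
# optimal proportion in true-thesis vs. false-thesis worlds".
# Tickers are from the notes above; all weights are made-up placeholders.

p_thesis = 1 / 3

fundamental = {"NVDA": 0.01, "AMZN": 0.03, "INTC": 0.02}  # false-thesis weights
thesis      = {"NVDA": 0.10, "AMZN": 0.02, "INTC": 0.04}  # true-thesis weights
market_cap  = {"NVDA": 0.02, "AMZN": 0.04, "INTC": 0.01}  # benchmark weights

blended = {t: p_thesis * thesis[t] + (1 - p_thesis) * fundamental[t]
           for t in fundamental}
overweight = {t: blended[t] - market_cap[t] for t in blended}
```

With these placeholder numbers the blend reproduces the pattern in the notes: NVDA comes out net overweight, AMZN net underweight, and INTC overweight on both legs.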
I think I broadly agree with this decomposition (not that I know much about the field or anything). Some specific disagreements:
I think basically (within the space of longtermist interventions) a lot of these concerns approximately add up to normality. Investing in AI might generate 10^20 times more utility than the current value of Earth, sure, but most plausible x-risk interventions will be within a small number of OOMs of this as well, as will a fair number of longtermist movement-building interventions.
Re 2: Maybe you’re already thinking of this (otherwise “50% chance that takeoff happens slowly enough for us to spend most of our money” feels a bit high), but one thing to keep in mind is that we’re still operating in a world where markets are mostly rational. The investment thesis is implicitly betting on EAs knowing an open “secret” about the world (specifically, that the rest of the world undervalues AI in the medium-to-long term). However, this doesn’t mean the financial world will keep being “irrational” (by our lights) about AI. We might expect this secret to become apparent to the rest of the world well before AI is actually contributing to speeding up GDP doublings in the technical “slow takeoff” ways.
Unfortunately, timing the market is famously hard, and I’m not sure there’s a reasonable way to model this (even for people who legitimately know secrets, pricing seems a lot easier than timing). So I don’t have great ideas for how to model “when will people wake up to AI, conditional upon slow-takeoff EAs being right about AI.” Though I have a few mediocre ideas, like starting with an ignorance prior, or interviewing EAs at hedge funds to see if they have the relevant psychological insights.
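One crude way to operationalize “start with an ignorance prior” for wake-up timing: put a flat prior over which year markets reprice AI, and compare it against an assumed takeoff-start year. Both the horizon and the takeoff-start year below are arbitrary assumptions for illustration, not forecasts.

```python
import random

# Flat ignorance prior over when markets "wake up" to AI, vs. an assumed
# year when slow takeoff becomes measurable. All numbers are assumptions.

random.seed(1)
horizon = 30        # assumed: wake-up happens some year in the next 30
takeoff_start = 15  # assumed: slow takeoff becomes measurable in year 15

n = 100_000
early = sum(random.uniform(0, horizon) < takeoff_start for _ in range(n))
ratio = early / n
print(ratio)  # ~ takeoff_start / horizon under a flat prior
```

Under a flat prior this just recovers takeoff_start / horizon, which mainly shows how much the answer hinges on the horizon you pick; interviews or base rates would be needed to sharpen it.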