I like the general idea that AI timelines matter for all altruists, but I really don’t think it’s a good idea to try to “beat the market” like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they’ll spend billions bidding up the stock price until they’re no longer undervalued.
Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief—but it should. It assumes that you’re capable of making better predictions than the aggregate stock market. Don’t bet on beating markets.
That sounds like a nice world, but unfortunately I don’t think that the market is quite that efficient. (Like the parent, I’m not going to offer any evidence, just express my view.)
You could reply, “then why ain’cha rich?” but it doesn’t really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask “then why ain’cha several times richer than you otherwise would be?” but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It’s not anything mind-blowing or even obvious to an external observer.
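To put rough numbers on that, here’s a sketch with entirely invented figures: a stock trading at half its fair value that corrects over a decade only roughly doubles your money relative to holding the market, which is consistent with “several times richer” over a lifetime but nothing mind-blowing in any given decade.

```python
# Back-of-the-envelope check, with invented numbers: how much richer does
# exploiting a large, slow-correcting mispricing actually make you?
market_return = 0.07     # assumed broad-market annual return
correction_years = 10    # assumed time for the mispricing to correct
underpricing = 0.50      # asset assumed to trade at half its "true" value

# Annualized excess return from the price converging to fair value
excess = (1 / (1 - underpricing)) ** (1 / correction_years) - 1
print(f"excess return: {excess:.1%}/yr")  # ~7.2%/yr

# Wealth multiple vs. just holding the market for those 10 years
multiple = ((1 + market_return + excess) / (1 + market_return)) ** correction_years
print(f"richer by a factor of {multiple:.2f} after {correction_years} years")  # ~1.9x
```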
“Don’t try to beat the market” still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and “hyper-informed” in particular is really overstating it. (As is “incredibly high prior” elsewhere.)
(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)
I don’t think my argument here is analogous to trying to beat the market. (i.e. I’m not arguing that AI research companies are currently undervalued.)
I’m saying that in slow-takeoff scenarios, AI research companies would have a ton of growth potential.
See growth vs. value investing.
Edit: clarified my view in this comment.
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven’t considered some factor—the growth potential of AI companies—and that’s why they are such a good purchase relative to other stocks and shares.
My interpretation of Premise 4 (“Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI”) is that Milan is asserting that a company which develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed those capabilities. This does not seem like a controversial claim: it is analogous to the stock of a biotech company skyrocketing after good news is announced, such as regulatory approval to launch a drug. The market may have priced in the probability of a company leading a slow AI takeoff in the next X years, but having that actually happen is an entirely different story.
The concept of investing in something that generates a lot of capital if something “bad” happens (for example, holding AI stocks so that you have far more capital to deploy if a suboptimal or dangerous AI takeoff occurs) is known as “mission hedging.” EAs have covered this topic before; see Hauke’s 2018 article A generalized strategy of ‘mission hedging’: investing in ‘evil’ to do more good. Mission hedging is currently a recommended research topic on Effective Thesis.
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes. EAs will have different answers to this question depending on their model of how they can deploy funds now and in the future to impact the world.
The title of the article (“If slow-takeoff AGI is somewhat likely, don’t give now” at the time of writing) implies giving now is bad because mission hedging all of that money for the purpose of donating later will lead to better outcomes. I believe the article should be modified to indicate that EAs should evaluate employing mission hedging for part or all of their intended donations rather than suggest that putting all intended donations towards mission hedging and ceasing to donate now is an obviously better option. After all, a hedge is commonly known as a protective measure against certain outcomes, not the sole strategy at work.
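As a crude illustration of the partial-hedging point (every multiple below is an invented assumption, not an estimate), the tradeoff across hedge fractions looks something like:

```python
# A minimal sketch of the "hedge part, donate part" idea. Every number
# here is an invented assumption, not an estimate.
bankroll = 100_000
ai_multiple_takeoff = 10.0    # assumed AI-stock multiple if takeoff begins
ai_multiple_otherwise = 1.5   # assumed multiple if it doesn't

for hedge_fraction in (0.0, 0.25, 0.5, 1.0):
    donated_now = bankroll * (1 - hedge_fraction)
    hedged = bankroll * hedge_fraction
    print(
        f"hedge {hedge_fraction:.0%}: donate ${donated_now:,.0f} now; "
        f"later deployable: ${hedged * ai_multiple_takeoff:,.0f} if takeoff, "
        f"${hedged * ai_multiple_otherwise:,.0f} otherwise"
    )
```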
I agree this makes more sense in terms of mission hedging
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.

Agree this is important. As I’ve thought about it some more, it appears quite complicated. It also seems important to have a view based on more than rough intuition, as it bears on the donation behavior of a lot of EAs.
I’d probably benefit from having a formal model here, so I might make one.
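In case it’s useful, here is the skeleton such a model might have. Every parameter is an explicit assumption chosen for illustration, and the answer is driven almost entirely by how much a post-takeoff dollar is assumed to be worth relative to a dollar today:

```python
# A deliberately crude first pass at a donate-now vs. invest-and-donate-later
# model. Every parameter is an assumption, not a researched estimate.
p_takeoff = 0.3               # chance a slow takeoff begins within the horizon
years = 20                    # investment horizon
r_takeoff = 0.25              # assumed real annual return in a takeoff world
r_normal = 0.05               # assumed real annual return otherwise
value_now = 1.0               # value of a marginal dollar donated today
value_later_takeoff = 0.5     # value per dollar donated after takeoff begins
value_later_normal = 0.8      # value per dollar donated later, no takeoff

ev_donate_now = value_now
ev_invest_then_donate = (
    p_takeoff * (1 + r_takeoff) ** years * value_later_takeoff
    + (1 - p_takeoff) * (1 + r_normal) ** years * value_later_normal
)

print(f"EV(donate now): {ev_donate_now:.2f}")
print(f"EV(invest, donate later): {ev_invest_then_donate:.2f}")
```

With these particular (arbitrary) numbers investing dominates, but the conclusion flips easily, e.g. if a marginal post-takeoff dollar turns out to be worth very little.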
Thanks for tying this to mission hedging – definitely seems related.
Milan is asserting that a company which develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed those capabilities.

Perhaps that, but even if they don’t, the returns from a market-tracking index fund could be very high in the case of transformative AI.
I’m imagining two scenarios:
1. AI research progresses & AI companies start to have higher-than-average returns
2. AI research progresses & the returns from this trickle through the whole market (but AI companies don’t have higher-than-average returns)
A version of the argument applies to either scenario.
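A toy version of that claim, with invented multiples and probabilities: even the index-fund holder does well in scenario 2, so the invest-rather-than-donate argument doesn’t hinge on picking AI stocks.

```python
# (scenario-1 multiple, scenario-2 multiple) over some horizon -- all invented
portfolios = {"AI basket": (8.0, 3.0), "index fund": (4.0, 3.0)}
p_scenario_1 = 0.5  # assumed chance that AI firms specifically capture the gains

for name, (s1, s2) in portfolios.items():
    expected_multiple = p_scenario_1 * s1 + (1 - p_scenario_1) * s2
    print(f"{name}: {expected_multiple:.1f}x expected")
```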
Clarified my view somewhat in this reply to Aidan.
The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.
I totally might be misunderstanding your point, but here’s the contradiction as I see it. If you believe (A) the S&P500 doesn’t give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.
I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you’re hoping to predict the future of AI better than the market, I’d say the expected value of AI is already reflected in tech stock prices.
To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.
I’m engaging the question of whether to make substantial donations now or whether to save for later. I don’t have a strong view on what investments are the best savings vehicle, though I do have an intuition that the market is undervaluing the growth potential of AI-intensive companies.
So I suppose I disagree with both (A) and (B). I think the S&P 500 probably will generate high enough returns to justify investing instead of donations, and I think AI companies are somewhat undervalued.
To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.

We may be using different definitions of undervalued (see this comment). In the sense that I think AI companies are worth investing in because I think their stock price will be higher in future, I agree they’re “undervalued.”
But I don’t think they’re undervalued in the sense that the market is mis-valuing their current assets, etc. If their stock price is higher in the future, I’d expect this to be because they’ve made real productivity gains.
Also probably worth clarifying that the “slow” in slow takeoff is still incredibly fast compared to historical economic growth. (See the graph in Paul’s takeoff post.)
It seems plausible that in the slow-takeoff scenario, almost all returns to GDP growth are accruing to those who own capital, and in particular those who own the companies driving the growth.
(This is all highly speculative and is making assumptions in the background, e.g. property rights still being meaningful in a slow-takeoff scenario.)
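For a sense of scale, under Paul’s operationalization (as I understand it, a complete four-year doubling of world output before the first one-year doubling), even the “slow” scenario implies growth rates with no modern precedent:

```python
# Annual growth rate implied by world output doubling in N years
for doubling_years in (4, 1):
    growth = 2 ** (1 / doubling_years) - 1
    print(f"{doubling_years}-year doubling -> {growth:.0%} annual growth")
# Prints ~19% and 100%, versus roughly 3-4% recent world GDP growth.
```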
I think the background assumptions are probably doing a lot of work here. You’d have to go really far into the weeds of AI forecasting to get a good sense of which factors push in which direction, but I can come up with a million possible considerations.
Maybe slow takeoff is shortly followed by the end of material need, making any money earned in a slow takeoff scenario far less valuable. Maybe the government nationalizes valuable AI companies. Maybe slow takeoff doesn’t really begin for another 50 years. Maybe the profits of AI will genuinely be broadly distributed. Maybe current companies won’t be the ones to develop transformative AI. Maybe investing in AI research increases AI x-risks, by speeding up individual companies or causing a profit-driven race dynamic.
It’s hard to predict when AI will happen; it’s worlds harder to translate that into present-day stock-picking advice. If you’ve got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can outpredict the market. But beating the market is the only way to generate higher-than-average returns in the long run.
I’m not claiming that investing in AI companies will generate higher-than-average returns in the long run.
I’m claiming that an altruist’s marginal dollar is better put towards investment (in AI companies or in the S&P 500) than towards present-day donations.
Fantastic, I completely agree, so I don’t think we have any substantive disagreement.
I guess my only remaining question would then be: should your AI predictions ever influence your investing vs donating behavior? I’d say absolutely not, because you should have incredibly high priors on not beating the market. If your AI predictions imply that the market is wrong, that’s just a mark against your AI predictions.
You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn’t expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don’t expect AI companies to have higher-than-average future returns.
Would you agree with that?
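To make “incredibly high priors” concrete, here’s a quick Bayesian sketch with invented numbers: even if your AI analysis is fairly reliable, a low prior on having an edge keeps the posterior small.

```python
# Toy Bayes update: how much should an AI-based disagreement with the
# market move you? All numbers are invented for illustration.
prior_edge = 0.01            # prior that you can genuinely out-predict the market
p_flag_given_edge = 0.9      # chance your analysis flags a mispricing if you have an edge
p_flag_given_no_edge = 0.3   # chance it flags one anyway (overconfidence, noise)

posterior = prior_edge * p_flag_given_edge / (
    prior_edge * p_flag_given_edge + (1 - prior_edge) * p_flag_given_no_edge
)
print(f"posterior that the market is actually wrong: {posterior:.1%}")  # ~2.9%
```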
I feel somewhat confused about whether to expect that AI companies will beat the broader market.
On one hand, I have an intuition that the current market price hasn’t fully baked in the implications of future AI development. (Especially when I see things like most US executives thinking that AI will have less of an impact than the internet did.)
On the other, I take your point about it being very hard to “beat the market,” and I generally have a high prior that markets are efficient.
Inadequate Equilibria seems relevant here.
Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don’t expect AI companies to have higher-than-average future returns.

I do think that your AI predictions should bear on your decision to invest or donate now. Even if AI companies won’t have higher-than-average returns, the average return of future firms could be extremely high (given productivity gains unlocked by AI), and it would be a shame to miss out on that return because you donated the money you otherwise would have invested (in a basket of AI companies, a broader index fund like VTSAX, wherever).
Also, I was being somewhat sloppy in the post on this point – thanks for pushing on it!
I’ve edited the post to better reflect my view.
If AI research companies aren’t currently undervalued, then your Premise 4 (being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI) is incorrect, because the market will have anticipated those outsized returns and priced them in to the current share price.
Hm, I guess so, but wouldn’t all investing be value investing under this framing? (i.e. it’ll always be the case that when I make an investment, I’m expecting that the investment is a good deal / will increase in value / the current price is “too low” given what I think the future will be like.)
We might be getting tripped up on semantics here.
(edited) I just saw your link above about growth vs value investing. I don’t think that’s a helpful distinction in this case, and when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value. (Maybe that’s less true for startups, but we’re talking about already-listed companies here).
I do think the core claim of “if AGI will be as big a deal as we think it’ll be, then the markets are systematically undervaluing AI companies” is a reasonable one, but the arguments you’ve given here aren’t precise enough to justify confidence, especially given the aforementioned need for caution. For example, premise 4 doesn’t actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments. I think you can shore that link up, but not without contradicting your other point:
I’m not claiming that investing in AI companies will generate higher-than-average returns in the long run.

Which means (under the definition I’ve been using) that you’re not claiming that they’re undervalued.
...when people talk about a company being undervalued I think that typically includes both unrecognised growth potential and unrecognised current value.

I think it’s a spectrum:

1. Value stocks are those where most of the case for investment comes from the market mis-pricing the firm’s current operations.
2. Growth stocks are those where most of the case for investment comes from the firm’s expected future growth.
For example, premise 4 doesn’t actually follow directly from premise 3 because the returns could be large but not outsized compared with other investments.

Agreed; I clarified my position after Aidan pointed this out: (1, 2)