I don’t think my argument here is analogous to trying to beat the market. (i.e. I’m not arguing that AI research companies are currently undervalued.)
I’m saying that in slow-takeoff scenarios, AI research companies would have a ton of growth potential.
See growth vs. value investing.
Edit: clarified my view in this comment.
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven’t considered some factor—the growth potential of AI companies—and that’s why they are such a good purchase relative to other stocks and shares.
My interpretation of Premise 4 (“Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI”) is that Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities. This does not seem like a controversial claim: it is analogous to a biotech company’s stock skyrocketing after good news, such as regulatory approval to launch a drug, is announced. The market may have priced in the probability of a company leading a slow AI takeoff in the next X years, but that actually happening is an entirely different story.
The concept of investing in something that generates a lot of capital if something “bad” happens (for example, investing in AI stocks with the goal of having a lot more capital to deploy if a suboptimal/dangerous AI takeoff occurs) is known as “mission hedging.” EAs have covered this topic; see, for example, Hauke’s 2018 article A generalized strategy of ‘mission hedging’: investing in ‘evil’ to do more good. Mission hedging is currently a recommended research topic on Effective Thesis.
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes. EAs will have different answers to this question depending on their model of how they can deploy funds now and in the future to impact the world.
The title of the article (“If slow-takeoff AGI is somewhat likely, don’t give now” at the time of writing) implies that giving now is bad because mission hedging all of that money for the purpose of donating later will lead to better outcomes. I believe the article should be modified to indicate that EAs should evaluate employing mission hedging for part or all of their intended donations, rather than suggesting that putting all intended donations towards mission hedging and ceasing to donate now is an obviously better option. After all, a hedge is commonly understood as a protective measure against certain outcomes, not the sole strategy at work.
I agree this makes more sense in terms of mission hedging.
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.

Agree this is important. As I’ve thought about it some more, it appears quite complicated. It also seems important to have a view based on more than rough intuition, as it bears on the donation behavior of many EAs.
I’d probably benefit from having a formal model here, so I might make one.
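For what it’s worth, a minimal version of such a model might compare donating everything today against investing and donating the proceeds after a takeoff begins. Every parameter below (horizon, market return, takeoff probability, takeoff return multiplier, and the relative cost-effectiveness of later donations) is an illustrative placeholder, not an estimate:

```python
# Toy give-now vs. invest-then-give model. All parameters are
# illustrative placeholders; the point is the structure, not the numbers.

def expected_impact_give_now(donation, current_cost_effectiveness=1.0):
    """Impact of donating everything today, in arbitrary 'impact units'."""
    return donation * current_cost_effectiveness

def expected_impact_invest(
    donation,
    years=15,                      # investment horizon
    annual_return=0.07,            # expected market return
    p_takeoff=0.3,                 # chance a slow takeoff begins within the horizon
    takeoff_multiplier=5.0,        # extra returns on AI holdings if a takeoff happens
    later_cost_effectiveness=0.5,  # money may matter less (or more) later
):
    """Impact of investing now and donating the proceeds after `years`."""
    base = donation * (1 + annual_return) ** years
    # With probability p_takeoff, the AI-heavy portfolio is multiplied further.
    expected_wealth = base * (p_takeoff * takeoff_multiplier + (1 - p_takeoff))
    return expected_wealth * later_cost_effectiveness

now = expected_impact_give_now(10_000)
later = expected_impact_invest(10_000)
print(f"give now: {now:,.0f} impact units; invest-then-give: {later:,.0f}")
```

The interesting work would be in justifying the parameter values, especially how cost-effective marginal donations are after a takeoff has already begun.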
Thanks for tying this to mission hedging – definitely seems related.
Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities.

Perhaps that, but even if they don’t, the returns from a market-tracking index fund could be very high in the case of transformative AI.
I’m imagining two scenarios:
1. AI research progresses & AI companies start to have higher-than-average returns
2. AI research progresses & the returns from this trickle through the whole market (but AI companies don’t have higher-than-average returns)
A version of the argument applies to either scenario.
Clarified my view somewhat in this reply to Aidan.
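To make the two scenarios concrete, here is a toy calculation (the return figures are made up purely for illustration). A broad index fund fully captures scenario 2, while only an AI-tilted portfolio captures the extra returns in scenario 1:

```python
# Toy comparison of an AI-tilted portfolio vs. a broad index under the two
# scenarios above. All return figures are illustrative, not forecasts.

def portfolio_return(ai_weight, ai_return, market_return):
    """Return of a portfolio holding `ai_weight` in AI stocks, rest in the market."""
    return ai_weight * ai_return + (1 - ai_weight) * market_return

# Scenario 1: AI companies outperform the broader market.
s1_ai, s1_market = 0.30, 0.08
# Scenario 2: AI-driven gains trickle through the whole market.
s2_ai = s2_market = 0.15

for w in (0.0, 0.5, 1.0):
    r1 = portfolio_return(w, s1_ai, s1_market)
    r2 = portfolio_return(w, s2_ai, s2_market)
    print(f"AI weight {w:.0%}: scenario-1 return {r1:.1%}, scenario-2 return {r2:.1%}")
```

In scenario 2 the AI weight makes no difference, so the save-then-donate argument goes through even for an ordinary index fund.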
The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.
I totally might be misunderstanding your point, but here’s the contradiction as I see it. If you believe (A) the S&P500 doesn’t give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.
I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you’re hoping to predict the future of AI better than the market, I’d say the expected value of AI is already reflected in tech stock prices.
To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.
(A) the S&P500 doesn’t give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued..., then you cannot believe that (C) AI stock is a better investment opportunity than any other.

I’m engaging the question of whether to make substantial donations now or whether to save for later. I don’t have a strong view on what investments are the best savings vehicle, though I do have an intuition that the market is undervaluing the growth potential of AI-intensive companies.
So I suppose I disagree with both (A) and (B). I think the S&P 500 probably will generate high enough returns to justify investing instead of donations, and I think AI companies are somewhat undervalued.
To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued

We may be using different definitions of undervalued (see this comment). In the sense that I think AI companies are worth investing in because I think their stock price will be higher in future, I agree they’re “undervalued.”
But I don’t think they’re undervalued in the sense that the market is mis-valuing their current assets, etc. If their stock price is higher in the future, I’d expect this to be because they’ve made real productivity gains.