I don’t think my argument here is analogous to trying to beat the market. (i.e. I’m not arguing that AI research companies are currently undervalued.)
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven’t considered some factor—the growth potential of AI companies—and that’s why they are such a good purchase relative to other stocks and shares.
My interpretation of Premise 4 (“Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI”) is that Milan is asserting that a company which develops advanced AI capabilities will likely generate higher returns than the overall stock market after it has developed those capabilities. This does not seem like a controversial claim: it is analogous to a biotech company’s stock skyrocketing after good news, such as regulatory approval of a drug, is announced. The market may have priced in the probability of a company leading a slow AI takeoff in the next X years, but that event actually happening is an entirely different story.
Investing in something that generates a lot of capital if something “bad” happens, such as buying AI stocks so that you have much more capital to deploy if a suboptimal/dangerous AI takeoff occurs, is known as “mission hedging.” EAs have covered this topic before, for example in Hauke’s 2018 article A generalized strategy of ‘mission hedging’: investing in ‘evil’ to do more good. Mission hedging is currently a recommended research topic on Effective Thesis.
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes. EAs will have different answers to this question depending on their model of how they can deploy funds now and in the future to impact the world.
The title of the article (“If slow-takeoff AGI is somewhat likely, don’t give now” at the time of writing) implies that giving now is bad because mission hedging all of that money and donating it later will lead to better outcomes. I believe the article should instead suggest that EAs evaluate mission hedging for part or all of their intended donations, rather than present redirecting all intended donations into mission hedging and ceasing to donate now as an obviously better option. After all, a hedge is commonly understood as a protective measure against certain outcomes, not as the sole strategy at work.
I agree this makes more sense in terms of mission hedging.
I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal/dangerous AI takeoff begins is better than simply donating the money now in an attempt to avert bad outcomes.
Agree this is important. As I’ve thought about it some more, it appears quite complicated. Also seems important to have a view based on more than rough intuition, as it bears on the donation behavior of a lot of EAs.
I’d probably benefit from having a formal model here, so I might make one.
Thanks for tying this to mission hedging – definitely seems related.
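A first pass at such a model might look something like the sketch below. Every parameter (takeoff probability, hedge payoff, impact per dollar now vs. later) is a made-up placeholder, not an estimate:

```python
# Toy comparison: donate D now vs. invest D as a mission hedge and donate later.
# All numbers are illustrative assumptions, not estimates.

def impact_give_now(donation, impact_per_dollar_now):
    """Impact of donating everything today."""
    return donation * impact_per_dollar_now

def impact_hedge_then_give(donation, p_takeoff, hedge_multiplier,
                           baseline_return, impact_per_dollar_later):
    """Expected impact of investing now and donating once the outcome resolves.

    With probability p_takeoff a slow takeoff begins and the hedge pays
    hedge_multiplier; otherwise the money just grows at baseline_return.
    """
    expected_wealth = (p_takeoff * donation * hedge_multiplier
                       + (1 - p_takeoff) * donation * (1 + baseline_return))
    return expected_wealth * impact_per_dollar_later

now = impact_give_now(10_000, impact_per_dollar_now=1.0)
later = impact_hedge_then_give(10_000, p_takeoff=0.3, hedge_multiplier=10,
                               baseline_return=0.5, impact_per_dollar_later=0.5)
print(f"give now: {now:,.0f}  hedge and give later: {later:,.0f}")
```

On these made-up numbers hedging wins, but the conclusion flips if post-takeoff dollars are much less useful (a lower impact_per_dollar_later) or the hedge payoff is modest, which is exactly the crux of the question above.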
Milan is asserting a company that develops advanced AI capabilities in the future will likely generate higher returns than the stock market after it has developed these capabilities.
Perhaps that, but even if they don’t, the returns from a market-tracking index fund could be very high in the case of transformative AI.
I’m imagining two scenarios:
1. AI research progresses & AI companies start to have higher-than-average returns
2. AI research progresses & the returns from this trickle through the whole market (but AI companies don’t have higher-than-average returns)
A version of the argument applies to either scenario.
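To make the two scenarios concrete, here is a toy illustration; the multipliers are arbitrary placeholders, not forecasts:

```python
# Terminal value of a $10k investment under assumed growth multipliers.
stake = 10_000
baseline = stake * 1.5    # no transformative AI: ordinary market growth (assumed)
scenario_1 = stake * 10   # AI companies outperform, and you hold them (assumed 10x)
scenario_2 = stake * 5    # gains spread market-wide; an index fund captures them (assumed 5x)

# In either AI scenario the investor ends up with several times more to donate
# than in the baseline, so the invest-then-give argument doesn't hinge on
# picking the winning AI companies, only on capturing AI-driven growth somewhere.
print(baseline, scenario_1, scenario_2)  # 15000.0 100000 50000
```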
Clarified my view somewhat in this reply to Aidan.