I think I broadly agree with this decomposition (not that I know much about the field or anything). Some specific disagreements:
Under the right circumstances, investing in AI might generate some insane amount of utility, like 10^20 times the current value of Earth. I'm not sure how to think about that. Does the EV calculation say to invest in AI for that small chance of a 10^20 gain?
I think basically (within the space of longtermist interventions) a lot of these concerns approximately add up to normality. Investing in AI might generate 10^20 times more utility than the current value of Earth, sure, but most plausible x-risk interventions will be within a small number of OOMs of this as well, as will a fair number of longtermist movement-building interventions.
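The "adds up to normality" point can be made concrete with a toy expected-value comparison. All the numbers below are illustrative assumptions I'm making up for the sketch, not estimates from the discussion:

```python
import math

def expected_value(p_success, payoff):
    """Expected utility of an intervention: probability times payoff."""
    return p_success * payoff

# Assume (purely for illustration) that investing in AI has a
# one-in-a-million shot at a 10^20 payoff, while a typical x-risk
# intervention has a one-in-a-thousand shot at a payoff a few OOMs smaller.
ev_ai = expected_value(1e-6, 1e20)      # ~1e14
ev_xrisk = expected_value(1e-3, 1e17)   # ~1e14

# How many orders of magnitude apart are the two EVs?
ooms_apart = abs(math.log10(ev_ai) - math.log10(ev_xrisk))
print(ooms_apart)  # ~0 with these illustrative numbers
```

The astronomical payoff doesn't automatically dominate, because once other longtermist options also have astronomically large payoffs, what matters is the handful of OOMs separating them, not the absolute size.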
Let our investment thesis be:
1. There will be a slow takeoff
2. Takeoff will start soon enough before the singularity that we have time to spend most of our money on things we care about
3. Takeoff will be driven by publicly-traded companies
Re 2: Maybe you’re already thinking of this (otherwise “50% chance that takeoff happens slowly enough for us to spend most of our money” feels a bit high), but one thing to keep in mind is that we’re still operating in a world where markets are mostly rational. The investment thesis is implicitly betting on EAs knowing an open “secret” about the world (specifically, that the rest of the world undervalues AI in the medium-to-long term). However, this doesn’t mean the financial world will keep being “irrational” (by our lights) about AI. We might expect this secret to become apparent to the rest of the world well before AI is actually contributing to speeding up GDP doublings in the technical “slow takeoff” sense.
Unfortunately, timing the market is famously hard, and I’m not sure there’s a reasonable way to model this (even for people who legitimately know secrets, pricing seems a lot easier than timing). So I don’t have great ideas for how to model “when will people wake up to AI, conditional on slow-takeoff EAs being right about AI.” Though I have a few mediocre ideas, like starting with an ignorance prior, or interviewing EAs at hedge funds to see if they have the relevant psychological insights.
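To show what "starting with an ignorance prior" might look like, here is a minimal sketch: put a uniform prior on the year the broader market "wakes up" to AI, then ask how much of that probability mass falls before an assumed start of slow takeoff. All dates are placeholder assumptions, not estimates anyone has endorsed:

```python
def p_before(cutoff, lo, hi):
    """P(wake-up year < cutoff) under a uniform prior on [lo, hi]."""
    return min(max((cutoff - lo) / (hi - lo), 0.0), 1.0)

# Placeholder window for when the market wakes up to AI, and a
# placeholder year for the start of slow takeoff (pure assumptions).
p = p_before(cutoff=2040, lo=2025, hi=2060)
print(round(p, 3))  # 0.429 under these placeholder dates
```

The point of the exercise isn't the number, which is entirely driven by the made-up window; it's that even a crude prior forces you to state how much of your investment thesis depends on the market staying "irrational" until takeoff actually begins.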