(This is a draft I wrote in December 2021. I didn’t finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)
Thoughts on the OpenAI Strategy
OpenAI has one of the most audacious plans out there and I’m surprised at how little attention it’s gotten.
First, they say flat out that they’re going for AGI.
Then, when they raised money in 2019, they included a clause capping investor returns at 100x their investment.
“Economic returns for investors and employees are capped… Any excess returns go to OpenAI Nonprofit… Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.”[1]
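For concreteness, the capped-return mechanism can be sketched in a few lines. This is a toy illustration, not OpenAI’s actual terms: the function name and the numbers are hypothetical, and the real structure (rounds, equity classes, declining caps) is more complex.

```python
# Toy sketch of a capped-return ("windfall") structure.
# All numbers are made up; OpenAI's actual terms are more complex.

def split_returns(investment: float, total_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a payout between an investor (capped at cap_multiple x
    their investment) and the nonprofit (which takes any excess)."""
    cap = investment * cap_multiple
    investor_share = min(total_return, cap)
    nonprofit_share = max(total_return - cap, 0.0)
    return investor_share, nonprofit_share

# A $10M investment that somehow returns $5B:
investor, nonprofit = split_returns(10e6, 5e9)
# investor gets $1B (the 100x cap); the remaining $4B goes to the nonprofit
```

The point the cap makes vivid: under ordinary outcomes the nonprofit gets nothing, but under an AGI-scale windfall it gets nearly everything.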
On Hacker News, one of their employees says,
“We believe that if we do create AGI, we’ll create orders of magnitude more value than any existing company.” [2]
You can read more about this mission on the charter:
“We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”[3]
This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:
1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit
5. Use AGI for “the benefit of all”?
I’m really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.
Keep in mind that making AGI is a really big deal. If you’re the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.
I imagine that the 100x return cap means the excess earnings would go to the nonprofit, which in practice means Sam Altman, OpenAI’s senior leadership, and perhaps the board of directors (if legal authorities have any influence post-AGI).
This would be a massive power gain for a small subset of people.
If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed among all of Google’s shareholders. But if OpenAI makes AGI, the money will go to OpenAI’s leadership, on paper to fulfill OpenAI’s mission.
On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.
And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors.
But I’m sort of surprised that so few other people seem at least a bit concerned and curious about the proposal. My impression is that most press outlets haven’t thought much at all about what AGI would actually mean, and that most companies and governments simply assume OpenAI is dramatically overconfident.
(Aside on the details of Step 5)
I would love more information on Step 5, but I don’t blame OpenAI for not providing it.
Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
Arguably, OpenAI doesn’t really need to figure out Step 5 until their odds of actually having a decisive AGI advantage become more plausible.
I assume it’s really hard to actually put together any reasonable plan now for Step 5.
My guess is that we could really use some great nonprofit and academic work outlining what a positive and globally acceptable Step 5 would look like (one that wouldn’t upset any group too much if they understood it). There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count as one); better work on Step 5 seems like an obvious next step.
[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This is called a “decisive strategic advantage” in Nick Bostrom’s book Superintelligence.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/
Also, see:
https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html
“Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.”
https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/