They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn’t viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn’t necessarily seem unreasonable to me under general charitable-law principles.
I’m confused about this line of argument. Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
I tried to find the original mission statement. Is the following correct?
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
If so, I can see how an OpenAI plaintiff could try to argue that “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole” necessitates them “winning the AI arms race”, but I don’t exactly see why an impartial observer should grant them that.
Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
It depends on what exactly “losing the AI arms race” means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to “advance digital intelligence,” and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply some sort of technological leadership position was a prerequisite for mission success. I agree that being first to AGI isn’t particularly relevant to succeeding at the mission. But if they can’t stay competitive with Google et al., it’s questionable whether they can meaningfully achieve the goal of “advanc[ing] digital intelligence.”
So, for instance, if OpenAI’s progress rate were reduced by X% due to the fundraising disadvantages it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seems plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI’s current ability to fundraise with its non-profit structure exists but is not yet public.
(I found the language you quoted going back to 2015, so it’s probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)
To me, “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole” does not necessitate them building AGI at all. Indeed, the same mission statement could be said to apply to, e.g., Redwood Research.
Further evidence for this view comes from OpenAI’s old merge-and-assist clause, which indicates that they’d be willing to fold and assist a different project if that project is a) within two years of building AGI and b) sufficiently value-aligned and safety-conscious.