Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
It depends on what exactly “losing the AI arms race” means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to “advance digital intelligence,” and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply some sort of technological leadership position was a prerequisite to mission success. I agree that being first to AGI isn’t particularly relevant to succeeding at the mission. But if they can’t stay competitive with Google et al., it’s questionable whether they can meaningfully achieve the goal of “advanc[ing] digital intelligence.”
So for instance, if OpenAI’s progress rate were to be reduced by X% due to the disadvantages in raising capital it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seem plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI’s current ability to fundraise with its non-profit structure exists but is not yet public.
(I found the language you quoted going back to 2015, so it’s probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)
To me, “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole” does not necessitate them building AGI at all. Indeed, the same mission statement could plausibly apply to, e.g., Redwood Research.
Further evidence for this view comes from OpenAI’s old merge-and-assist clause, which indicates that they’d be willing to fold and assist a different company if that company is (a) within two years of building AGI and (b) sufficiently good.