I don’t believe #1 is correct. The Open Philanthropy grant is a small fraction of the funding OpenAI has received, and I don’t think it was crucial for OpenAI at any point.
I think #2 is fair insofar as running a scaling lab poses big risks to the world. I hope that OpenAI will avoid training or deploying directly dangerous systems; I think that even the deployments it’s done so far pose risks via hype and acceleration. (Considering the latter a risk to society is an unusual standard to hold a company to, but I think it’s appropriate here.)
#3 seems off to me—“regulatory capture” does not describe what’s at the link you gave (where’s the regulator?). At best it seems like a strained analogy, and even there it doesn’t seem right to me—I don’t know of any sense in which I or anyone else was “captured” by OpenAI.
I can’t comment on #4.
#5 seems off to me. I don’t know whether OpenAI uses nondisparagement agreements; I haven’t signed one. The reason I am careful with public statements about OpenAI is (a) it seems generally unproductive for me to talk carelessly in public about important organizations (likely to cause drama and drain the time and energy of me and others); (b) I am bound by confidentiality requirements, which are not the same as nondisparagement requirements. Information I have access to via having been on the board, or via being married to a former employee, is not mine to freely share.
Details about OpenAI’s nondisparagement agreements have come out.