Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:
1. Inspired/funded by EA
2. Taking big risks with other people’s lives/money
3. Attempt at regulatory capture
4. Large employee exodus due to safety/ethics/governance concerns
5. Lack of public details of concerns due in part to non-disparagement agreements
I followed this link, but I don’t understand what it has to do with regulatory capture. The linked thread seems to be about nepotistic hiring and conflicts of interest at/around OpenAI.
OpenPhil recommended a $30M grant to OpenAI in a deal that involved the OP (then-CEO of OpenPhil) becoming a board member of OpenAI. This occurred no later than March 2017. Later, OpenAI appointed both the OP’s then-fiancée and the fiancée’s sibling to VP positions. See these two LinkedIn profiles and the “Relationship disclosures” section in this OpenPhil writeup.
It seems plausible that there was a causal link between the $30M grant and the appointment of the fiancée and her sibling to VP positions. OpenAI may have made these appointments while hoping to influence the OP’s behavior in his capacity as an OpenAI board member seeking to shape safety and governance matters, as indicated in the following excerpt from OpenPhil’s writeup:

[...] the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.
Less importantly, see 30 seconds from this John Oliver monologue as evidence that companies sometimes suspiciously employ family members of regulators.
Thanks for explaining, but who are you considering to be the “regulator” who is “captured” in this story? I guess you are thinking of either OpenPhil or OpenAI’s board as the “regulator” of OpenAI. I’ve always heard the term “regulatory capture” in the context of companies capturing government regulators, but I guess it makes sense that it could be applied to other kinds of overseers of a company, such as its board or funder.
In the regulatory capture framing, the person who had a role equivalent to a regulator was the OP, who joined OpenAI’s Board of Directors as part of an OpenPhil intervention to mitigate x-risks from AI. (OpenPhil publicly stated their motivation to “help play a role in OpenAI’s approach to safety and governance issues” in their writeup on the $30M grant to OpenAI.)
An important difference is that OpenAI has been distancing itself from EA after the Anthropic split.
Unlike FTX, OpenAI has now had a second wave of resignations in protest of insufficient safety focus.
I don’t believe #1 is correct. The Open Philanthropy grant is a small fraction of the funding OpenAI has received, and I don’t think it was crucial for OpenAI at any point.
I think #2 is fair insofar as running a scaling lab poses big risks to the world. I hope that OpenAI will avoid training or deploying directly dangerous systems; I think that even the deployments it’s done so far pose risks via hype and acceleration. (Considering the latter a risk to society is an unusual standard to hold a company to, but I think it’s appropriate here.)
#3 seems off to me: “regulatory capture” does not describe what’s at the link you gave (where’s the regulator?). At best it seems like a strained analogy, and even there it doesn’t seem right to me; I don’t know of any sense in which I or anyone else was “captured” by OpenAI.
I can’t comment on #4.
#5 seems off to me. I don’t know whether OpenAI uses nondisparagement agreements; I haven’t signed one. The reason I am careful with public statements about OpenAI is (a) it seems generally unproductive for me to talk carelessly in public about important organizations (likely to cause drama and drain the time and energy of me and others); (b) I am bound by confidentiality requirements, which are not the same as nondisparagement requirements. Information I have access to via having been on the board, or via being married to a former employee, is not mine to freely share.
Details about OpenAI’s nondisparagement agreements have come out.