The OP (original poster) should not have invoked “nepotism” here at all. The alleged nepotism here is seemingly not worse than a situation in which a person uses their own wealth to fund a Helena-like organization that they lead.
It’s important that such misplaced invocations of nepotism w.r.t. OpenPhil do not distract from concerns that are actually valid. In particular: OpenPhil recommended a $30M grant to OpenAI in a deal that involved HK[1] (then-CEO of OpenPhil) becoming a board member of OpenAI. This occurred no later than March 2017. Later, OpenAI appointed both HK’s then-fiancée and the fiancée’s sibling to VP positions[2].
[1] This comment originally referred to “OP” instead of “HK” by mistake, due to me copying from another comment, sorry about that.
[2] See these two LinkedIn profiles and the “Relationship disclosures” section in this OpenPhil writeup.
I think the stated reasoning there by OP is that it’s important to influence OpenAI’s leadership’s stance and OpenAI’s work on AI existential safety. Do you think this is unreasonable? (feel free to not respond, I’m just curious)
To be fair, I do think it makes a lot of sense to invoke nepotism here. I would be highly suspicious of the grant if I didn’t happen to place a lot of trust in Holden Karnofsky and OP.
I think the stated reasoning there by OP is that it’s important to influence OpenAI’s leadership’s stance and OpenAI’s work on AI existential safety. Do you think this is unreasonable?
I do not think that reasoning was unreasonable, but I also think that deciding to give $30M to OpenAI in 2017 was not obviously net-positive, and it might have been one of the most influential decisions in human history (e.g. due to potentially influencing timelines, takeoff speed, and the research trajectory of AGI, due to potentially[1] inspiring many talented people to pursue/invest in AI, and due to potentially[1:1] increasing the number of actors who competitively pursue the development of AGI).
Therefore, the appointments of the fiancée and her sibling to VP positions, after OpenPhil’s decision to recommend that $30M grant, seem very problematic. I’m confident that HK consciously judged that $30M grant to be net-positive. But conflicts of interest can easily influence people’s decisions by biasing their judgment and via self-deception, especially with respect to decisions that are very non-obvious, where deciding either way can be reasonable.
Furthermore, being appointed to a VP position at OpenAI seems very financially beneficial (in expectation), not just because of the salary from OpenAI. The appointments of the fiancée and her sibling to VP positions probably helped them successfully approach investors as part of their effort to found Anthropic, which ended up raising at least $704M. HK said in an interview:
Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.
You wrote:
I would be highly suspicious of the grant if I didn’t happen to place a lot of trust in Holden Karnofsky and OP.
I don’t think there is a single human on earth whose judgment can be trusted to be resilient to severe conflicts of interest while making such influential non-obvious decisions (because such decisions can be easily influenced by biases and self-deception).
[1] EDIT: added “potentially”; the $30M grant may not have caused those things if OpenAI would have sufficiently succeeded without it.
The lack of clarity around “nepotism” in the original post is unfortunate. I do think the source of funds is relevant to evaluating Helena, though. Funding through family or family connections doesn’t have the same information value as knowing that disinterested third parties have decided to fund. And if someone made a ton of money and launched their own charity, we would at least know they brought to their new profession whatever knowledge, skills, and abilities they used to make megabucks.