I think the stated reasoning there by OP is that it’s important to influence OpenAI’s leadership’s stance and OpenAI’s work on AI existential safety. Do you think this is unreasonable?
I do not think that reasoning was unreasonable, but I also think that deciding to give $30M to OpenAI in 2017 was not obviously net-positive, and it might have been one of the most influential decisions in human history (e.g. due to potentially influencing timelines, takeoff speed and the research trajectory of AGI, due to potentially[1] inspiring many talented people to pursue/invest in AI, and due to potentially[1:1] increasing the number of actors who competitively pursue the development of AGI).
Therefore, the appointments of the fiancée and her sibling to VP positions, after OpenPhil’s decision to recommend that $30M grant, seem very problematic. I’m confident that HK consciously judged that $30M grant to be net-positive. But conflicts of interest can easily influence people’s decisions by biasing their judgment and via self-deception, especially for decisions that are very non-obvious, where deciding either way can be reasonable.
Furthermore, being appointed to VP at OpenAI seems very financially beneficial (in expectation), and not just because of the salary from OpenAI. The appointments of the fiancée and her sibling to VP positions probably helped them successfully approach investors as part of their effort to found Anthropic, which ended up raising at least $704M. HK said in an interview:
Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.
You wrote:
I would be highly suspicious of the grant if I didn’t happen to place a lot of trust in Holden Karnofsky and OP.
I don’t think there is a single human on earth whose judgement can be trusted to be resilient to severe conflicts of interest while making such influential non-obvious decisions (because such decisions can be easily influenced by biases and self-deception).
EDIT: added “potentially”; the $30M grant may not have caused those things if OpenAI would have sufficiently succeeded without it.