Eliezer’s tweet is about the founding of OpenAI, whereas Agrippa’s comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). It seems that, to argue Open Phil’s grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI’s work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That is a much harder case to make than Eliezer’s (Eliezer only has to compare a world where OpenAI didn’t exist with the actual world where it does).
Personally, I agree with Eliezer that the founding of OpenAI was a terrible idea, but I am pretty uncertain about whether Open Phil’s grant was a good or bad idea. Given that OpenAI had already disrupted the “nascent spirit of cooperation” that Eliezer mentions and was going to do things, it seems plausible that buying a board seat for someone with quite a bit of understanding of AI risk is a good idea (though I can also see many reasons it could be a bad idea).
One can also argue that EA memes re AI risk led to the creation of OpenAI, and that therefore EA is net negative (see here for details). But if this is the argument Agrippa wants to make, then I am confused why they decided to link to the 2017 grant.
Has Holden written any updates on outcomes associated with the grant?
> One can also argue that EA memes re AI risk led to the creation of OpenAI, and that therefore EA is net negative (see here for details). But if this is the argument Agrippa wants to make, then I am confused why they decided to link to the 2017 grant.
I am not making this argument, but I am certainly alluding to it. EA strategy (weighted by impact) has been to do things that in actuality accelerate timelines, and even to cooperate with that acceleration under the “have a good person standing nearby” theory.
I don’t think that lobbying against OpenAI, or other adversarial action, would have been that hard. But Open Phil and other EA leadership of the time decided to ally and hope for the best instead. This seems off the rails to me.
> Has Holden written any updates on outcomes associated with the grant?
Not to my knowledge.
> I don’t think that lobbying against OpenAI, or other adversarial action, would have been that hard.
It seems like once OpenAI was created and had disrupted the “nascent spirit of cooperation”, even if OpenAI went away (like, the company and all its employees magically disappeared), the shift in culture/people’s orientation to AI stuff (“which monkey gets the poison banana” etc.) wouldn’t have been reversed. So I don’t know if there was anything Open Phil could have done to OpenAI in 2017 to meaningfully change the situation in 2022 (other than, like, slowing AI timelines by a bit). Or maybe you mean some more complicated plan like ‘take adversarial action against OpenAI and any other AI labs that spring up later, try to bring back the old spirit of cooperation, and get all the top people into DeepMind instead of spreading out among different labs’.
I don’t mean to say anything pro-DeepMind, and I’m not sure there is anything positive to say re: DeepMind.
I think that once the nascent spirit of cooperation is destroyed, you can indeed take the adversarial route. It’s not hard to imagine successful lobbying efforts that lead to regulation (most people are in fact skeptical of tech giants wielding tons of power using AI!), among other things known to slow progress and hinder organizations. It is beyond me why such things are so rarely discussed or considered. I’m sure that Open Phil’s and 80k’s open cooperation with OpenAI has played a big part in shaping the narrative away from this kind of thing.