Dwarkesh Patel recently asked Holden about this:
Dwarkesh Patel
Are you talking about OpenAI? Yeah. Many people on Twitter have asked about your investing in OpenAI.
Holden Karnofsky
I mean, you can look up our $30 million grant to OpenAI. I think it was back in 2016; we wrote about some of the thinking behind it at the time. Part of that grant was getting a board seat for Open Philanthropy for a few years, so that we could help with their governance at a crucial early time in their development. I think some people believe that OpenAI has been net negative for the world, because they have contributed a lot to AI advancing and to AI being hyped, and they think that gives us less time to prepare for it. I do think that, all else being equal, AI advancing faster gives us less time to prepare, and that is a bad thing, but I don’t think it’s the only consideration. I think OpenAI has done a number of good things too, and has set some important precedents. I think it’s probably much more interested in a lot of the issues I’m talking about, and in risks from advanced AI, than the company I would guess would exist in its place, doing similar things, if it didn’t.
I don’t really accept the idea that OpenAI is a negative force. I think it’s highly debatable; we could talk about it all day. And if you look at our specific grant, it’s a completely different thing, because a lot of it was not just about boosting them, but about getting to be part of their early decision making. I think there were benefits to that, and it was important. My overall view is that I don’t look back on that grant as one of the worse grants we’ve made. But certainly we’ve done a lot of things that have not worked out. I think there are surely times when we’ve done things that have consequences we didn’t intend. No philanthropist can be free of that. What we can try to do is be responsible: seriously do our homework to understand things beforehand, see the risks that we’re able to see, and think about how to minimize them.
Wow, thanks so much.
Basically, he seems very uncertain about whether it has been positive or negative.
Very interesting