That is shockingly little money for advocacy if the 2% figure is correct. Maybe it's the right decision, this stuff is complex, but it's hard to avoid being a bit suspicious that the fact that leading EAs (Moskovitz and Karnofsky, for starters) make money if certain AI companies do well has something to do with our reluctance to fund pressuring AI companies to do things they don't want to do.
I'll flag that the actual amount is potentially a bit larger (the 2% is my quick estimate based only on public, rather than private, reports), but yeah, either way it's likely quite small.
FWIW I don't think it's likely that potential profit is playing a role per se. Put slightly differently, I think some major players in the space are more bought into the idea that the AI companies can be responsible, and thus that we might be jumping the gun by lobbying for safety measures they don't see as productive.
Yeah, obviously Moskovitz is not motivated by personal greed or anything like that, since he is giving most of his money away, and I doubt Karnofsky is primarily motivated by money in that sense either. And I think both of them have done great things, or I wouldn't be on this forum! But having your business do well is a point of pride for business people, it means more money *for your political and charitable goals*, and people also generally want their spouse's business to succeed for reasons beyond "I want money for myself to buy nice stuff". (For people who don't know: Karnofsky is married to Anthropic's President, which also means the CEO is his brother-in-law.)
Ah okay, yeah: the idea is that the success of the business itself is something they'll be apt to really care about, and on top of that there's a huge upside for positive impact if there's financial success, because they can then deploy that money towards further charitable ends.
Do you know off the top of your head how big a stake Dustin has in Anthropic? I think the amount would play a significant role here.
I don't remember the size, but I was thinking Dustin also still has Facebook shares, and probably still wants Facebook to do well on some level. EDIT: Although it's possible he has sold his Facebook shares since the last time I remember them being explicitly mentioned somewhere.
I agree with you on the meta-level case for suspicion about Open Philanthropy leadership, but in this case AFAICT the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.
I should say that I don't actually think Open Phil's leadership are anything other than sincere in their beliefs and goals. The sort of bias I am talking about operates more subtly than that. (See also the claim often attributed to Chomsky's Manufacturing Consent: that the US media functions as pro-US, pro-business propaganda, not because journalists are narrowly responding to incentives, but because newspaper owners hire people who sincerely share their worldview, a worldview that is common at elite universities anyway.)
That's a really interesting example; it does seem plausible to me that there's some selection pressure not just for more researchers but for more AI-company-friendly views. What do you think the other visible effects of a bias towards being friendly to the AI companies would be?
I think that still leaves the question of why Open Philanthropy (or any other big grantmaker besides SFF) didn't fund CAIP. The original post identifies some missteps CAIP made, but I also think most grantmakers' aversion to x-risk advocacy played a big role.