I agree with you on the meta case of suspicion about Open Philanthropy leadership but in this case AFAICT the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.
I should say that I don't actually think Open Phil's leadership are anything other than sincere in their beliefs and goals. The sort of bias I am talking about operates more subtly than that. (See also the claim often attributed to Chomsky's Manufacturing Consent that the US media functions as pro-US, pro-business propaganda: not because journalists respond to incentives in some narrow way, but because newspaper owners hire people who sincerely share their worldview, which is common at elite universities (etc.) anyway.)
That's a really interesting example; it does seem plausible to me that there's some selection pressure not just for more researchers but for more AI-company-friendly views. What do you think would be other visible effects of a bias towards being friendly to the AI companies?
I think that still leaves the question of why Open Philanthropy (or any other big grantmaker besides SFF) didn't fund CAIP. The original post identifies some missteps CAIP made, but I also think most grantmakers' aversion to x-risk advocacy played a big role.