In the past two years, the technical alignment organisations which have received substantial funding include
Your post does not actually say this, but when I read it I thought you were saying that these are all the organizations that have received major funding in technical alignment. I think it would have been clearer if you had said “include the following organizations based in the San Francisco Bay Area:” to make it clear that you’re discussing a subset.
Anyway, here are the public numbers, for those curious, of $1 million+ grants in technical AI safety in 2021 and 2022 (ordered by total size) made by Open Philanthropy:
Redwood Research: $9.4 million, and then another grant for $10.7 million
Many professors at a lot of universities: $14.4 million
CHAI: $11.3 million
Aleksander Madry at MIT: $1.4 million
Hofvarpnir Studios: $1.4 million
Berkeley Existential Risk Initiative—CHAI collaboration: $1.1 million
Berkeley Existential Risk Initiative—SERI MATS Program: $1 million
The Alignment Research Center received much less: $265,000.
There isn’t actually any public grant saying that Open Phil funded Anthropic, though that doesn’t rule out a non-public grant. It was public that FTX funded Anthropic.
having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary
Based on spending some time in Berkeley, I think a more accurate way to describe this is as follows:
People who care about AI safety and are involved in EA tend to move to Berkeley because that is where everyone else is. It really can increase your productivity if you can easily interact with others working in your field and know what is going on, or so the established wisdom goes. The people who have been around the longest are often leading research organizations or are grantmakers at Open Phil. They go to the same parties, have the same friends, work in the same offices, and often spend nearly all of their time working with little time to socialize with anyone outside their community. Unless they make a special effort to avoid dating anyone in their social community, they may end up dating a grantmaker.
If we want these conflicts of interest to go away, we could try simply saying it should be a norm for Open Phil not to grant to organizations with possible conflicts of interest. But knowing the Berkeley social scene, this means that many Open Phil grantmakers wouldn’t be able to date anyone in their social circles, since basically everyone in their social circles is receiving money from Open Phil.
The real question is, as you say, one of structure: whether so many of the EA-aligned AI safety organizations should be headquartered in close proximity and whether EAs should live together and be friends with basically only other EAs. That’s the dynamic that created the conflicts. I don’t think the answer to this is extremely obvious, but I don’t really feel like trying to argue both sides of it right now.
It’s possibly true that regrantors would reduce this effect in grantmaking, because you could designate regrantors who live elsewhere or who have different friends. But my suspicion would be that regrantors would by default be the same people who are already receiving grants.
I was looking into this topic, and found this source:

Speculating, conditional on the pitchbook data being correct, I don’t think that Moskovitz funded Anthropic because of his object-level beliefs about their value or because they’re such good pals; rather, I’m guessing he received a recommendation from Open Philanthropy, even if Open Philanthropy wasn’t the vehicle he used to transfer the funds.
Also note that Luke Muehlhauser is on the board of Anthropic; see footnote 4 here.