Adverse selection
What did SFF or the funders you applied to or talked to say (insofar as you know/are allowed to share)?
I am thinking a bit about adverse selection in longtermist grantmaking and how there are pros and cons to having many possible funders. Someone else not funding you could be evidence I/others shouldn't either, but conversely, updating too much on what a small number of grantmakers think could lead to missing lots of great opportunities as a community.
Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that as a community, we should be divesting (and investing in PauseAI instead!)
Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don't know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).
If this is true, or even just likely to be, and someone has data on it, making that data public, even in anonymous form, would be extremely high impact. I do recognize that such a move could come at great personal cost, but in case it is true, I just wanted to put it out there that such a disclosure could be a single action whose impact might far outstrip the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is just because nothing of this sort is actually going on, but it is worth being vigilant.
It's literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
What do you mean by "if this is true"? What is "this"?
It's well-known to be true that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
"this conflict of interest is why Tallinn appoints others to make the actual grant decisions"
(I don't think this is particularly true. I think the reasons why Jaan chooses to appoint others to make grant decisions are mostly unrelated to this.)
Doesn't he abstain from voting on at least SFF grants himself because of this? I've heard that, but you'd know better.
He generally doesn't vote on any SFF grants (I don't know why, but I would be surprised if it's because of trying to minimize conflicts of interest).
I don't know if this analogy holds, but that sounds a bit like how, in certain news organizations, "lower down" journalists self-censor: they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their careers might be affected by their superiors' reactions to their work. And I think if that is actually going on, it might not even be conscious.
I also saw some pretty strong downvotes on my comment above. Just to make clear, in case this is the reason for the downvotes: I am not insinuating anything; I really hope and want to believe there are no big conflicts of interest. I might have been scarred by working on climate change, where polluters spent years, if not decades, of time and money slowing down action on cutting CO2 emissions. Hopefully these patterns are not repeated with AI. Also, I have much less knowledge about AI and have only heard a few times that companies like Google are sponsoring safety conferences and the like.
In any case, I believe that in addition to technical and policy work, it would be really valuable to fund someone to pay close attention to, and dig into the details of, any conflicts of interest and skewed incentives. These set action on climate change back significantly, something we might not be able to afford with AI, as the onset of a catastrophe might be more binary. Regarding funding week: if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle. I would be keen to support something like this myself.
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.
FYI, weirdly timely podcast episode out from FLI that includes discussion of CoIs in AI Safety.
Could you spell out why you think this information would be super valuable? I assume it's something like: you would worry about Jaan's COIs and think his philanthropy would be worse/less trustworthy?
Yeah, apologies for the vague wording. I guess I am just trying to say that this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or not profit) from action on climate change attempting to slow down progress on solving it. If there might be mechanisms like this at play in AI safety (and that is a big if!), I feel (and this should be looked into more deeply) that there is value in directing even a minimal stream of funding toward having someone just pay attention to the chance that such mechanisms might be beginning to play out in AI safety as well. I would not say it makes the impact of people with COIs bad or untrustworthy, but it might point at gaps in what is not funded. This was all inspired by the OP saying that PauseAI seems to struggle to get funding. Maybe it is true that PauseAI is not the best use of marginal money. But at the same time, I think it could be true that, at least partially, such funding decisions are due to incentives playing out in subtle ways. I am really unsure about all this, but I think it is worth looking into funding someone with "no strings attached" to pay attention to this, especially given the stakes and how EA has previously suffered from too much trust, especially with the FTX scandal.
It's no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case, though. As Scott Alexander said:
Imagine if oil companies and environmental activists were both considered part of the broader "fossil fuel community". Exxon and Shell would be "fossil fuel capabilities"; Greenpeace and the Sierra Club would be "fossil fuel safety" – two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties – fossil fuel community parties – and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.
Since we passed the speculation round, we will receive feedback on the application, but haven't yet. I will share what I can here when I get it.