Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don’t know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).
If this is true, or even just likely to be, and someone has data on it, making that data public, even in anonymized form, would be extremely high-impact. I do recognize that such a move could come at great personal cost, but in case it is true, I just wanted to put it out there that such a disclosure could be a single action whose impact might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope that absence is simply because nothing of this sort is actually happening, but it is worth being vigilant.
It’s literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
What do you mean by “if this is true”? What is “this”?
It’s well-known to be true that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
(I don’t think this is particularly true. I think the reason why Jaan chooses to appoint others to make grant decisions are mostly unrelated to this.)
Doesn’t he abstain voting on at least SFF grants himself because of this? I’ve heard that but you’d know better.
He generally doesn’t vote on any SFF grants (I don’t know why, but would be surprised if it’s because of trying to minimize conflicts of interest).
I don’t know if this analogy holds, but that sounds a bit like how, in certain news organizations, “lower-down” journalists self-censor: they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their careers might be affected by their superiors’ reactions to their work. And if that is actually going on, it might not even be conscious.
I also saw some pretty strong downvotes on my comment above. Just to make clear, in case this is the reason for the downvotes: I am not insinuating anything. I really hope, and want to believe, that there are no big conflicts of interest. I might have been scarred by working on climate change, where the polluters spent years, if not decades, of time and money slowing down action on cutting CO2 emissions. Hopefully these patterns are not repeated with AI. Also, I have much less knowledge about AI and have only heard a few times that Google and others are sponsoring safety conferences and the like.
In any case, I believe that in addition to technical and policy work, it would be really valuable to fund someone to pay close attention to, and dig into the details of, any conflicts of interest and skewed incentives. These set action on climate change back significantly, something we might not be able to afford with AI, as it might be more binary in terms of the onset of a catastrophe. Regarding funding week: if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle. I would be keen to support something like this myself.
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.
FYI, weirdly timely podcast episode out from FLI that includes discussion of CoIs in AI Safety.
Could you spell out why you think this information would be super valuable? I assume it is something like: you worry about Jaan’s COIs and think his philanthropy would be worse / less trustworthy?
Yeah, apologies for the vague wording. I guess I am just trying to say this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or not profit) from climate action attempting to slow down progress on solving it. If mechanisms like this are at play in AI safety (and that is a big if!), I feel (and this should be looked into more deeply) that there is value in directing even a minimal stream of funding toward having someone just pay attention to the chance that such mechanisms are beginning to play out in AI safety as well. I would not say COIs make people’s impact bad or untrustworthy, but they might point at gaps in what is not funded.

This was all inspired by the OP’s point that Pause AI seems to struggle to get funding. Maybe it is true that Pause AI is not the best use of marginal money. But at the same time, it could be true that, at least partially, such funding decisions are due to incentives playing out in subtle ways. I am really unsure about all this, but I think it is worth looking into funding someone with “no strings attached” to pay attention to this, especially given the stakes and how EA has previously suffered from too much trust, notably in the FTX scandal.
It’s no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case, though. As Scott Alexander said:

Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety”—two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties—fossil fuel community parties—and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.