It’s well known that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I believe most of the traditional AI Safety community is).
(I don’t think this is particularly true. I think the reasons why Jaan chooses to appoint others to make grant decisions are mostly unrelated to this.)
Doesn’t he abstain from voting on at least SFF grants himself because of this? I’ve heard that, but you’d know better.
He generally doesn’t vote on any SFF grants (I don’t know why, but would be surprised if it’s because of trying to minimize conflicts of interest).
I don’t know if this analogy holds, but that sounds a bit like how, in certain news organizations, “lower down” journalists self-censor: they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their careers might be affected by their superiors’ reactions to their work. And if that is actually going on here, it might not even be conscious.
I also saw some pretty strong downvotes on my comment above. To be clear, in case this is the reason for the downvotes: I am not insinuating anything. I really hope, and want to believe, that there are no big conflicts of interest. I might have been scarred by working on climate change, where polluters spent years, if not decades, of time and money slowing down action on cutting CO2 emissions. Hopefully those patterns are not repeated with AI. I also have much less knowledge about AI, and have only heard a few times that Google and others sponsor safety conferences and the like.
In any case, I believe that in addition to technical and policy work, it would be really valuable to fund someone to pay close attention to, and dig into the details of, any conflicts of interest and skewed incentives. This set action on climate change back significantly, something we might not be able to afford with AI, since the onset of a catastrophe might be more binary. Regarding funding week: if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle. I would be keen to support something like this myself.
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.
FYI, a weirdly timely podcast episode is out from FLI that includes discussion of CoIs in AI Safety.