I don’t know if this analogy holds, but that sounds a bit like how, in certain news organizations, “lower down” journalists self-censor: they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their superiors’ reactions to their work might affect their careers. And if that is actually going on, I think it might not even be conscious.
I also saw some pretty strong downvotes on my comment above. Just to be clear, in case this is the reason for the downvotes: I am not insinuating anything, and I really hope and want to believe there are no big conflicts of interest. I may have been scarred by working on climate change, where polluters spent years, if not decades, of time and money slowing down action on cutting CO2 emissions. Hopefully these patterns are not repeated with AI. I also have much less knowledge about AI and have only heard a few times that Google etc. are sponsoring safety conferences and the like.
In any case, I believe that in addition to technical and policy work, it would be really valuable to fund someone to pay close attention to, and dig into the details of, any conflicts of interest and skewed incentives. These set action on climate change back significantly, something we might not be able to afford with AI, since the onset of a catastrophe might be more binary. Regarding funding week: if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle. I would be keen to support something like this myself.
There are massive conflicts of interest. We need a divestment movement within AI Safety / EA.
FYI, weirdly timely podcast episode out from FLI that includes discussion of CoIs in AI Safety.