Could you spell out why you think this information would be super valuable? I assume something like: you would worry about Jaan's COIs and think his philanthropy would be worse/less trustworthy?
Yeah, apologies for the vague wording; I am just trying to say this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or fail to profit) from climate action attempting to slow down progress on solving it. If mechanisms like this might be at play in AI safety (and that is a big if!), I think there is value in directing a minimal stream of funding to have someone simply pay attention to the chance that such mechanisms are beginning to play out in AI safety as well (though this should be looked into more deeply). I would not say COIs make people's impact bad or their work untrustworthy, but they might point at gaps in what is not funded. This was all inspired by the OP's point that Pause AI seems to struggle to get funding. Maybe it is true that Pause AI is not the best use of marginal money. But at the same time, it could be true that such funding decisions are, at least partially, driven by incentives playing out in subtle ways. I am really unsure about all this, but I think it is worth looking into funding someone with "no strings attached" to pay attention to it, especially given the stakes and how EA has previously suffered from too much trust, notably in the FTX scandal.
It's no secret that AI Safety / EA is heavily invested in AI. It is kind of crazy that this is the case, though. As Scott Alexander said:
Imagine if oil companies and environmental activists were both considered part of the broader "fossil fuel community". Exxon and Shell would be "fossil fuel capabilities"; Greenpeace and the Sierra Club would be "fossil fuel safety": two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties (fossil fuel community parties) and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.