Adverse selection
What did SFF or the funders you applied to or talked to say (insofar as you know/are allowed to share)?
I am thinking a bit about adverse selection in longtermist grantmaking and the pros and cons of having many possible funders. Someone else declining to fund you could be evidence that I or others shouldn't either, but conversely, updating too much on what a small number of grantmakers think could lead the community to miss lots of great opportunities.
Going to flag that a big chunk of the major funders and influencers in the EA/Longtermist community have personal investments in AGI companies, so this could be a factor in the lack of funding for work aimed at slowing down AGI development. I think that as a community, we should be divesting (and investing in PauseAI instead!)
Jaan Tallinn, who funds SFF, has invested in DeepMind and Anthropic. I don’t know if this is relevant because AFAIK Tallinn does not make funding decisions for SFF (although presumably he has veto power).
If this is true, or even just likely to be true, and someone has data on it, making that data public, even in anonymous form, would be extremely high impact. I recognize that such a move could come at great personal cost, but in case it is true, I want to put it out there that such a disclosure could be a single action whose impact far outstrips even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope that absence is because nothing of this sort is actually happening, but it is worth being vigilant.
What do you mean by “if this is true”? What is “this”?
It’s literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
Could you spell out why you think this information would be super valuable? I assume something like you would worry about Jaan’s COIs and think his philanthropy would be worse/less trustworthy?
It’s well-known to be true that Tallinn is an investor in AGI companies, and this conflict of interest is why Tallinn appoints others to make the actual grant decisions. But those others may be more biased in favor of industry than they realize (as I happen to believe most of the traditional AI Safety community is).
Since we passed the speculation round, we will receive feedback on the application, but we haven't received it yet. I will share what I can here when I get it.