Relatedly, it’s also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with—or at least are consonant with—their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there’s not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source has a continuing role at a frontier AI lab, or has a significant portion of their wealth still tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors’ personal financial interests could bias the community’s actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy for an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice how to spend their money.
But—one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don’t write posts and comments in support of stop/pause advocacy because they don’t want to irritate the new funders. Maybe grantmakers don’t recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There’s also a risk of losing public credibility—it would not be hard to cast orgs that took AI-involved source funds as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Orgs could commit to disclosing whether, and how much of, their funding comes from significantly AI-involved sources.
Many orgs could have a limit on the percentage of their budget they will accept from significantly AI-involved sources. Some orgs—those doing particularly sensitive work on AI knowledge and policy—should probably avoid any major gifts from AI-involved sources at all.
Particularly sensitive orgs could be granted extended runways and/or funding agreements with some sort of independent protection against non-renewal.
Other donors could provide more funding for red-teaming AI work, especially work that could affect the interests of AI-involved donors.
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by feigning interest in EA.
Thanks for restarting this conversation!