Suspicion of Anthropic Silent Shadow (AKA “Sass” or “Sassy”)
“PREREGISTRATION OF CRITICISM” (this isn’t the full criticism or solution, but I don’t know when I will type it up):
The root issue behind Sassy concerns is that a major realization of EA interest in, and money entering, AI might take the form of very high levels of funding to nascent entities. A major thread here is that these entities straddle the non-profit/for-profit boundary. Anthropic is one member of this class, but a number of other organizations are coming up.
(A brief sketch to give an impression of this level of funding is in this comment: “Next-level Next-level”).
The consequent effects of this funding are large and include casting a shadow on all recruiting and organization formation across EA. This remains true (and some effects may even be amplified) if the funding is virtuous: if EAs are recruited, for example, pulling EA talent into middle-management roles in AI orgs. There are positive effects too, such as high talent inflows. Importantly, most of these effects are silent.
As mentioned, a major thread is the for-profit status of these organizations. Some complications of this status are important (but cerebral):
the “cost-effectiveness” of these interventions could be infinitely positive
it introduces a new theory of change of EA steering and leading the relevant industries
it introduces a completely new theory of change related to TAI and takeoff, distinct from AI safety
However, the most immediate issue with for-profit status is venal. The slipperiness/porousness of straddling altruistic and profit-driven projects, and the incentives this creates, might be bad and hard to manage. To be clear, I am worried about situations where for-profits wielding altruistic narratives produce bad outcomes, much worse than those of a regular for-profit.
Regarding the amount of funding, it seems possible that no situation like this has existed in any non-profit ecosystem like EA in history (though we can probably find smaller instances where high-quality non-profits were decapitated as their talent and processes flowed to for-profits).
Note that the Sassy criticism differs from, or is even opposed to, most concerns about spending. For example, it views certain concerns about “conflict of interest” as irrelevant, or even misguided and counterproductive (EAs want closely aligned EAs together in leadership positions).
Solutions
Sass can’t be “stopped” now, and probably never could have been.
There are tangible things we can do that are robustly good:
Norms of frank communication about what people are doing when they get money or interest from EAs for these AI projects; this is good and interesting
A person whose explicit job is to check out what’s going on (and who is funded by an endowed fund for a period of time)
Note that neither of the above actions needs to have an adversarial character. Basically, it’s just leaning into the reality.
Conflicts of interest
Note that I have four conflicts of interest (basically, in the wrong way, of the kind that would normally cause a sane person not to write this):
I am funded by the relevant parties I am directly criticizing
I am a wannabe working on a for-profit language model thingy (so the very thing I am writing against)
I seek collaboration with people inside of these entities
I directly use several APIs and tools from the companies, and even undocumented features and aid, which could be cut off
Finally, in theory, I know (non-EA) people who want to invest in these “for-profit” organizations, and writing this isn’t helping that deal flow
My collaborators read and cringe at my forum comments
No wait, that’s actually six conflicts of interest.
So maybe “Next-level Next-level” will actually refer to the effects on my career, which is exciting.