✨✨Content✨✨
Alrighty, not sure how this contest works or what is going on, but I’ve got content to add in this thread!
My content might be different because I don’t see it as “red teaming”. I think of “red teaming” as criticism that tends to be opposed to the thing it targets. While wildly aggressive, I accept the underlying goals and try to improve them systemically, for example by finishing with constructive, specific suggestions.
Also, I think my content is different because it’s not circling the same topics (like, I don’t see anyone else writing these ideas or solutions).
Finally, everything will be themed with Nirvana. Please play the following song (Sliver)
“FYI I downvoted this and your other comment entirely because of the gratuitous pictures, videos, etc.”
Without directly confronting you (that would be wrong and not acceptable), and writing in an impartial voice:
These pictures and videos are a deliberate comment/critique on the hidden effects of current aesthetics and norms of discussion.
Here, your reaction is being intentionally provoked: it further illustrates what the critique views as defective, the prioritization of aesthetics over content. (Several of the points being made, about for-profit entities and alternative theories of change, seem monumental even if only half true, yet the response is “We’ll downvote them because of a picture”.)
Suspicion of Anthropic Silent Shadow (AKA “Sass” or “Sassy”)
“PREREGISTRATION OF CRITICISM” (this isn’t the full criticism or solution, but I don’t know when I will type it up):
The root issue behind Sassy concerns is that a major realization of EA interest in, and money entering, AI might take the form of very high levels of funding to nascent entities. A major thread here is that these entities straddle the non-profit/for-profit boundary. Anthropic is one member of this class, but a number of other organizations are coming up.
(A brief sketch to give an impression of this level of funding is in this comment: “Next-level Next-level”).
The consequent effects of this funding are large and include casting a shadow over all recruiting and organization formation across EA. This remains true (and some effects may even increase) if the funding is virtuous, for example if EAs are recruited and EA talent is pulled into middle-management roles at AI orgs. There are positive effects too, such as high talent inflows. Importantly, most of these effects are silent.
As mentioned, a major thread is the for-profit status of these organizations. Some complications of this status are important (but cerebral):
the “cost-effectiveness” of these interventions could be infinitely positive
it introduces a new theory of change of EA steering and leading relevant industries
a completely new theory of change related to TAI and takeoff, distinct from AI safety
However, the most immediate issue with for-profit status is venal. The slipperiness/porousness of straddling altruistic and for-profit projects, and the incentives this creates, might be bad and hard to manage. To be clear, I am worried about situations where for-profits wielding altruistic narratives produce bad outcomes, much worse outcomes than just having a regular for-profit.
Regarding the amount of funding, it seems possible that no situation like this has existed in any non-profit ecosystem like EA in history (though we can probably find smaller instances where high-quality non-profits were decapitated as their talent and processes flowed to for-profits).
Note that Sassy criticism differs from, or is even opposed to, most concerns about spending. For example, it views certain concerns about “conflict of interest” as irrelevant or even misguided and counterproductive (EAs want closely aligned EAs together in leadership positions).
Solutions
Sass can’t be “stopped” now, and probably never could have been.
There are tangible things we can do that are robustly good:
Norms of frank communication about what people are doing when they get money or interest from EAs for these AI projects; this is good and interesting
A person whose explicit job is to check out what’s going on (and who is funded by an endowed fund for a period of time)
Note that neither of the above actions needs to have an adversarial character. Basically, it’s just leaning into the reality.
Conflicts of interest
Note that I have 4 conflicts of interest (basically, conflicts in the wrong direction, the kind that would normally cause a sane person not to write this):
I am funded by the relevant parties I am directly criticizing
I am a wannabe working on a for-profit language model thingy (so the very thing I am writing against)
I seek collaboration with people inside of these entities
I directly use several APIs and tools from these companies, and even undocumented features and aid, which could be cut off
Finally, in theory, I know (non-EA) people who want to invest in these “for-profit” organizations, and writing this isn’t helping that deal flow
My collaborators read, and cringe at, my forum comments
No wait, that’s actually six conflicts of interest.
So maybe “Next-level Next-level” will actually refer to the effects on my career, which is exciting.