The parent comment wasn’t really a reply to you, and in some sense neither is this comment (but it’s not intended to exclude or talk past you either).
Basically, I am observing the mini-activism you are doing, which is one instance of a broader class of activity and related agendas that I see trying to steer EA in a certain direction. Although these people have wildly different object-level aims, what they have in common is trying to steer EA using models, beliefs, and patterns from left social movements.
My baseline ideology is basically coastal liberal, so I’m not opposed to your end goal (modulo very different beliefs about things like timelines, the value of sentient entities, and how actual execution/tractability/competence affects the end result). In fact, I suspect my goals are almost identical to yours.
It’s rather that I believe:
Many of these activists are lemons, and they won’t be able to execute on their goals, for a variety of reasons, not least of which is a lack of understanding of the people and institutions they criticize and want to change.
The viability here is not even close. To calibrate, even if they were empowered by 1000%, they would probably still fail and result in tumult.
More substantively, I think a reasonable interpretation is that the “establishment in power”, as you might say, are perfectly aware of everything in this comment. For virtuous reasons, they won’t accept this activism, and have to react by shutting down a lot of progress for fear of dilution and tumult.
I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
In addition to this bycatch, the ideas and language these activists use are producing collisions on material issues (e.g. “decentralization”) that they don’t actually know how to solve but others do, and it is now less tenable for those viable people to voice the issues.
It’s a worse form of crowding out, a sort of Gresham’s law, and it’s further counterproductive in that it increases the pressure, empowering further bad activism, which is pathological.
If I’m correct above, this blocks substantive progress and really hurts many issues we care about. There is a list of recent rejects from community building, for example, who would bring in a lot of good people in expectation if this sort of activism weren’t a concern.
I believe I’ve studied various movements/orgs/ideologies for this specific reason: to understand and resolve these scenarios.
Instead of just saying the above (which I have now done), my comment was setting up the background with (maybe not quite) object-level discussion of some social movements, partly to interrogate how this would play out and how to think about addressing it.
I think this discussion would have to be several layers less removed from the object level in order to contain insight.
> I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
> There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn’t a concern.
Your explicit claim seems to be that fear of leftism / leftist activist practices is responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating more slowly than they would without this fear. Your beliefs about the magnitude of this slowdown are unclear. (Do you think growth has been halved? Cut to a tenth?)
You seem to have strong priors that this would be true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community building initiatives over the past 5 years that sought funding and were rejected, the EA Hotel and some other thing for training AI safety researchers, and in both cases the reasons for rejection were specific and completely removed from anything you have discussed.
--
I chose what is IMO the most contentful and specific part of your writing to react to. I think your commentary would be helped by containing more content per word (above zero?).
I repeat that my comments are for onlookers, or to lay out pieces of a broader argument and interrogate responses; they won’t be understood right now.
I encourage you to consider my claim that my goals are aligned to yours and then consider the more generous version of my views.
Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
To get to the evidence about the “leftist patterns of activism” concern:
I don’t think you will hear this explicitly, for the very reasons you demonstrated in your reply. Explicitly excluding an ideology, or laying bare the uselessness of activism, looks and feels really bad and will be met with immense hostility, even when it is not the direct issue and the real issue is distinct and principled.
Instead, the word here is “dilution”, which is a major concern that you will hear voiced explicitly in public, and even then it’s often voiced reluctantly. I think most people who have this concern fear activism, or fear strong underlying beliefs that aren’t aligned with EA.
I think there are very few examples of outright grifting or wholesale mimicry; when examined, most incidents involve views of the world that EAs find misguided or counterproductive.
I have spent years near or around environmental and conservation movements; I appear dyed in the wool and would pass easily. Models from these experiences strongly indicate that the problematic behaviour uses these patterns (which afflict those movements too), not least because these people tell me so outright.
There are several instances (not necessarily the people referenced above) of people who are talented, are good communicators, and have spent time in community building. These people have been rejected without explanation or remedial instructions, despite outlining specific plans to do work and communicate EA material. It seems like one should just fund dozens of these nascent people if the concern is merely that they are just OK. It doesn’t seem costly to hire an “OK” community builder, but it seems extremely costly to risk funding people who will replicate themselves and entrench behaviour focused on building constituencies around prosaic or non-EA causes, which I believe describes leftist activism well.
To be clear, as before, I’m not saying these rejected people are activists or misguided. The concern is bycatch from these filters.
Finally, and very directly: actual incidents of real activism are extremely obvious here, and, you must admit, involve similar patterns of accusations of centralization, censorship, and dismissal by an out-of-touch, self-interested central authority, on causes no one cares about.
> Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? For you to change yours? That I would engage in less “leftist micro activism”? That I would decide DXE is probably net harmful instead of net positive? That I would start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural / ideological association.
--
I agree that the concerns around “dilution” are evidence of the phenomenon you are discussing.
It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.
--
TLDR:
Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit involve similar patterns of accusations of centralization, censorship, and dismissal from an out of touch, self interested central authority on causes no one cares about.
Yes, this seems to follow the format of your entire thesis:
Agrippa is engaging in, or promoting, X (X is not particularly specified in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster’s)
X, or some subset of X, is often involved in the toxic and incompetent culture of leftist activism
Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded fewer things for fear of it), so Agrippa should not engage in or promote X
At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
[ This comment is addressing Agrippa and not related to my other comments/beliefs about leftist activism ]
This reply is generous and thoughtful of you.
> I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced?
> X is not particularly specified in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster
Yes, you are exactly right in your thoughts here.
The truth is that I didn’t mean to write about you, Sapphire, or DXE at all. As you noticed, there are in fact limited or no object-level issues related to you in my comment chain.
This is deliberate. I guess that is exactly why I picked you to start this chain. As you say:
> [My behavior] does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs
As mentioned, I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object-level issues of someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
For you, that’s not a concern, because I’m not even talking about the issues you care about. I also think your issues have a different emotional character and are more abstract (30M of funding to a defecting AI safety org).
Another motivation of mine that is more (less?) principled is that I believe you and Sapphire are picking an unreasonable fight with Michael St Jules, in this comment chain.
I think he was talking about specialization (“This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations”), and I thought you ignored this reasonable explanation in order to pin down some excessive deference or favoring of concentration of power (and you and Sapphire may not understand his beliefs about the specific funders well, as this is cause-area dependent).
Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on the AI safety object-level issues that I think you care about. I also believe he is a “moderate” who doesn’t want concentration of thought or power.
This made me annoyed (it does sort of resemble some kinds of leftist activism), and I sort of trolled you with patterns I thought “rhymed” with what you did.
> Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit involve similar patterns of accusations of centralization, censorship, and dismissal from an out of touch, self interested central authority on causes no one cares about.
This is just bad writing on my part. I meant “here” to mean, in EA or in EA discussion, and not referring to your behavior, strategy or comments.
>At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
This is really thoughtful, self-aware, and genuinely impressive. It is generous of you to think this through, and it gives me too much credit.
I appreciate the praise! Very cool.
I don’t agree with your analysis of the comment chain.
> (and his beliefs about the specific funders you and Sapphire may not understand well as this is cause area dependent).
> Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on AI safety object level issues that I think you care about.
These assertions / assumptions aren’t true. He didn’t limit his commentary (which was a reply / rebuttal to Sapphire) to animal welfare, and even if he had, that would be irrelevant, given that animal welfare is Sapphire’s dominant cause area. In fact, his response re: Rethink (corrected by Sapphire) was misleading! So I’m not sure how this reading is supported.
> I thought you ignored this reasonable explanation
I am also not really sure how this reading is supported.
Tangentially: as a matter of fact, I think that EA has been quite negative for animal welfare, because in large part CEA is a group of longtermists who co-opted efforts to organize effective animal welfare and then neglected it. I am a longtermist too, but I think the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage with animal welfare as a cause area about as much as with longtermism, excluding donations.
> As mentioned I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object level issues to someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
There is really no shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than I do. Discussion that I find valuable is overwhelmingly specific, clear, and object-level. Heuristics are fine, but they should be clearly relevant and strong. Etcetera. Failing at this is responsible for a ton of noise, and the noise is even noisier when it sits in a reply setting and superficially resembles conversation.