One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency.
I agree this is common and it was what I most commonly confronted in college at Cornell. Oh, I should actually just be focused on living sustainably, not being racist, and participating in democracy, and this will be an optimally ethical life? Convenient if true!
I have several friends who are members of Direct Action Everywhere. I think DXE, as I’m exposed to it, does present the sort of alt-EA you are asking about. I think that many DXE members could non-hypocritically comment that EA is complacent / that EAs are generally more complacent people than themselves.
While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic.
I do think that you can interpret DXE as a general-good, “beneficentrist” org, given that if you are not longtermism-pilled, it is IMO reasonable to say that animal welfare is the highest moral priority, and I think this is their actual belief. It’s an org for people to do the most important thing as they see it, not for them to just do a thing.
> While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic.
The problem is that you can also convince them about many many things.
Unfortunately, an issue with ideologically driven “social movement” organizations is almost constant churn and doubt over probably well-understood problems, like resource allocation, and internal institutions, like long-term planning, that other orgs solved long ago.
On the other hand, they constantly indulge things that seem objectively bad, like ignoring evidence against theories of change, and spending enormous time on politics and abstractions that seem unproductive and even overshadow EA’s excesses.
It may be prejudice, but having been inside and seen several organizations of various kinds, this looks overdetermined for dysfunction once these orgs reach any scale.
Again, at risk of bias, it’s hard not to indulge my personal suspicion that these intensely chaotic environments select for self-replication and media attention, with the results:
- Why these particular orgs exist, or at least why we hear about them, is their ability to be aggressive.
- Their ability to focus and gather resources is limited.
- The aggressive orgs are selected for over more functional, slower orgs, crippling the ecosystem for strong social organizations.
- The leaders and cultures arising from them are suspect, culturally and “epistemically”.
Sorry, I am not sure I follow this post. I am not really commenting on how much DXE should grow; I’m not involved. However, if I were looking for those “moral optimizers” outside of EA that are surprisingly hard to find, I think that one place you can find them is DXE. It’s an existence proof: there are, IMO, sincere critics of the kind the OP discusses.
If I were going to discuss whether DXE should grow, I would just try to list what they have accomplished and do some estimates of the costs. Heuristics about types of organization, the quality of the cultures involved, etc., would be of lower interest to me.
The parent comment wasn’t really a reply to you, and in some sense neither is this comment (but it’s not intended to exclude or talk past you either).
Basically, I am observing the mini-activism being done by you, which is one instance of a broader class of activity and related agendas trying to steer EA in a certain way. Although they have wildly different object-level aims, what these people have in common is trying to steer EA using models, beliefs, and patterns from left social movements.
My base ideology is basically coastal liberal, so I’m not opposed to your end goal (modulo very different beliefs about things like timelines, the value of sentient entities, and how actual execution/tractability/competence affects the end result). In fact, I suspect my goals are almost identical to yours.
It’s rather that I believe:
- Many of these activists are lemons, and they won’t be able to execute on their goals, for a variety of reasons, not least of which is lacking understanding of the people and institutions they criticize and want to change.
- The viability here is not even close. To calibrate: even if they were empowered by 1000%, they would probably still fail and produce tumult.
- More substantively, I think a reasonable interpretation is that the “establishment in power”, as you might say, is perfectly aware of everything in this comment. For virtuous reasons, they won’t accept this activism, and have to react by shutting down a lot of progress for fear of dilution and tumult.
- I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
- In addition to this bycatch, the ideas and language these activists use are causing a collision on material issues (e.g. “decentralization”) that they don’t actually know how to solve but others do, and now it’s less tenable for the viable people to voice these issues.
- It’s a worse form of crowding out, a sort of Gresham’s law, and it’s further counterproductive in that it increases the pressure, empowering further bad activism, which is pathological.
- If I’m correct above, this is counterproductive and blocks substantive progress, which really hurts many issues we care about. There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn’t a concern.
I believe I’ve studied various movements/orgs/ideologies for this specific reason: to understand and resolve these scenarios.
Instead of just saying the above (which I just now did), my comment was setting up the background, with (maybe not quite) object-level discussion of some social movements, sort of to interrogate how this would play out and how to think about addressing this.
I think this discussion would have to be several layers less removed from the object level in order to contain insight.
> I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
> There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn’t a concern.
Your explicit claim seems to be that fear of leftism / leftist activist practices is responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating slower than they would if they did not have this fear. Your beliefs about the magnitude of this slowdown are unclear. (Do you think growth has been halved? Cut to a tenth?)
You seem to have strong priors that this would be true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community building initiatives over the past 5 years that have tried to get funding which were rejected, the EA Hotel and some other thing for training AI safety researchers, and the reasons for rejection were both specific and completely removed from anything you have discussed.
-- I chose what was IMO the most contentful and specific part of your writing to react to. I think your commentary would be helped by containing more content per word (above zero?).
I repeat that my comments are for onlookers, or to lay out pieces of a broader argument and interrogate responses, and won’t be understood right now.
I encourage you to consider my claim that my goals are aligned to yours and then consider the more generous version of my views.
Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
To get to evidence about the “leftist patterns of activism” concerns:
I don’t think you will hear this explicitly, for the very reasons you demonstrated in your reply. Explicitly excluding an ideology, or laying naked the uselessness of activism, looks and feels really bad and will be reacted to with immense hostility, even if it is not the direct issue and the real issue is distinct and principled.
Instead, the word here is “dilution”, which is a major concern that you will hear explicitly in public, and even then it’s often voiced reluctantly. I think that most people who have this concern fear activism, or fear strong underlying beliefs that aren’t aligned to EA.
I think there are very few examples of outright grifting or wholesale mimicry, and if examined, most incidents involve views of the world that EAs find misguided or counterproductive.
I have spent years near or around environmental and conservation movements; I appear dyed in the wool and would pass easily. Models from these experiences strongly indicate that the problematic behaviour uses these patterns (which afflict those movements too), not least because these people tell me so outright.
There are several instances (not necessarily the people referenced above) of people who are talented, good communicators, and have spent time in community building. These people have been rejected without explanation or remedial instructions, despite outlining specific plans to do work and communicate EA material. It seems like one should just fund dozens of these nascent people if the concern is merely that they are “OK”. It doesn’t seem costly to hire an “OK” community builder, but it seems extremely costly to risk funding people who will replicate themselves and entrench behaviour focused on building constituencies around prosaic or non-EA causes, which I believe describes leftist activism well.
To be clear, as before, I’m not saying these rejected people are activists or misguided. The concern is bycatch from these filters.
Finally, and very directly, actual incidents of real activism are extremely obvious here, and, you must admit, involve similar patterns of accusations of centralization, censorship, and dismissal from an out-of-touch, self-interested central authority on causes no one cares about.
> Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? For you to change yours? Would I engage in less “leftist micro-activism”? Would I decide DXE is probably net harmful instead of net positive? Would I start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural/ideological association.
--
I agree that the concerns around “dilution” are evidence of the phenomenon you are discussing.
It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.
-- TLDR:
> Finally, and very directly, actual incidents of real activism are extremely obvious here, and, you must admit, involve similar patterns of accusations of centralization, censorship, and dismissal from an out-of-touch, self-interested central authority on causes no one cares about.
Yes, this seems to follow the format of your entire thesis:
- Agrippa is engaging in, or promoting, X (X is not particularly specified in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster’s).
- X, or some subset of X, is often involved in the toxic and incompetent culture of leftist activism.
- Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded fewer things for fear of it), so Agrippa should not engage in or promote X.
At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
[ This comment is addressing Agrippa and not related to my other comments/beliefs about leftist activism ]
This reply is generous and thoughtful of you.
> I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced?
> X is not particularly specified in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster
Yes, you are exactly right in your thoughts here.
The truth is that I didn’t mean to write about you, Sapphire, or DXE at all. As you noticed, there are in fact limited or no object-level issues related to you in my comment chain.
This is deliberate, and I guess it is the reason I picked you to start this chain. As you say:
> [My behavior] does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs
As mentioned, I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object-level issues of someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
For you, that’s not a concern, because I’m not even talking about the issues you care about. I also think your issues have a different emotional character and are more abstract (30M of funding to a defecting AI safety org).
Another motivation of mine that is more (less?) principled is that I believe you and Sapphire are picking an unreasonable fight with Michael St Jules, in this comment chain.
I think he was talking about specialization (“This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations”) and I thought you ignored this reasonable explanation, to try to pin down some excessive deference or favoring concentration of power (and his beliefs about the specific funders you and Sapphire may not understand well as this is cause area dependent).
Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on the AI safety object-level issues that I think you care about. I also believe he is a “moderate” who doesn’t want concentration of thought or power.
This made me annoyed (it does sort of resemble some kinds of leftist activism), and I sort of trolled you with patterns I thought “rhymed” with what you did.
> Finally, and very directly, actual incidents of real activism are extremely obvious here, and, you must admit, involve similar patterns of accusations of centralization, censorship, and dismissal from an out-of-touch, self-interested central authority on causes no one cares about.
This is just bad writing on my part. I meant “here” to mean in EA or in EA discussion, not your behavior, strategy, or comments.
>At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
This is really thoughtful, self-aware, and genuinely impressive. It is generous of you to think about, and gives me too much credit.
I don’t agree with your analysis of the comment chain.
> (and his beliefs about the specific funders you and Sapphire may not understand well as this is cause area dependent)
> Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on AI safety object level issues that I think you care about.
These assertions/assumptions aren’t true. He didn’t limit his commentary (which was a reply/rebuttal to Sapphire) to animal welfare. If he had, it would still be irrelevant that he’d done so, given that animal welfare is Sapphire’s dominant cause area. In fact, his response re: Rethink (corrected by Sapphire) was misleading! So I’m not sure how this reading is supported.
> I thought you ignored this reasonable explanation
I am also not really sure how this reading is supported.
Tangentially: as a matter of fact, I think that EA has been quite negative for animal welfare, in large part because CEA is a group of longtermists co-opting efforts to organize effective animal welfare and then neglecting it. I am a longtermist too, but I think that the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage animal welfare as a cause area about as much as longtermism, excluding donations.
> As mentioned I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object level issues to someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
There is really not a shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than me. Discussion that I find valuable is overwhelmingly specific, clear, object-level. Heuristics are fine but should be clearly relevant and strong. Etcetera. Not doing so is responsible for a ton of noise, and the noise is even noisier if it’s in a reply setting and superficially resembles conversation.
As some evidence of this, here is what one of the founders of Extinction Rebellion (Roger Hallam, who got cancelled or something, I don’t know) wrote about infighting:
> You say this to them, their eyes glaze over. They don’t understand what you mean. Because they have no life experience of revolution. They have spent their comfortable lives in offices in front of computers, on social media, they can’t conceive of any time that will be different than this. In practical terms, it means that they will never support anything which upsets those in power.
> ...
> The radical left are those people who say great stuff, but are totally hopeless at doing anything about it. They call for climate justice, they are into ‘intersectionality’, they are pro identity politics. But the main thing is not what they say they want. The main thing is they have no idea about how to make it happen. In fact, everything they actually do stops change from happening. In actual fact, they are not radical at all. They are reactionary.
> ...
> The biggest disaster of the last 30 years has been the adoption of horizontalist dogma. The notion that you should not have leaders, hierarchies or clear structures. Indeed, for many years, I believed much of this ideology. But practical experience shows it to be nonsense. This is because it imposes moral ideas on timeless truths about how people make decisions together. As such, it prevents movements from reaching a fraction of their political potential.
Again, this is a hardcore former leader of XR fighting very basic fights over ideologies and primitive decisions like governance and management (and I think he got deposed or something because of it, but it’s just a big soup).
I’m sure there’s every permutation of this “left” vs “right” fighting going on constantly.
The point is that I’m skeptical that these orgs and cultures are a positive example for anything besides self-replication.
DXE Bay is not very decentralized. It’s run by the five people in ‘Core Leadership’. The leadership is elected democratically, though there is a bit of complexity, since Wayne is influential but not formally part of the leadership.
Leadership being replaced over time is not something to lament. I would strongly prefer more, uhhhh, ‘churn’ in EA’s leadership. I endorse the current leadership quite a bit and am glad that several previous ‘Core’ members lost their elections.
Note: I haven’t been very involved in DXE since I left California. It’s really quite concentrated in the Bay.
> The biggest disaster of the last 30 years has been the adoption of horizontalist dogma. The notion that you should not have leaders, hierarchies or clear structures.
I agree this is common and it was what I most commonly confronted in college at Cornell. Oh, I should actually just be focused on living sustainably, not being racist, and participating in democracy, and this will be an optimally ethical life? Convenient if true!
I have several friends who are members of Direct Action Everywhere. I think DXE, as I’m exposed to it, does present a sort of alt-EA that you are asking about. I think that many DXE members could non hypocritically comment that EA is complacent / EAs are generally more complacent people than themselves.
While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic.
I do think that you can interpret DXE as a general good, “beneficentrist” org, given that if you are not longetermism-pilled it is IMO it is reasonable to say that animal welfare is the highest moral priority and I think this is their actual belief. It’s an org for people to do the most important thing as they see it, not for them to just do a thing.
RE: Complacency:
The problem is that you can also convince them about many many things.
Unfortunately, an issue with orgs that draw on ideological tones like “social movement” organizations, is almost constant churn and doubt on probably well understood ideas, like resource allocation, and internal institutions like long planning, that other orgs have solved long ago.
On the other hand, they constantly indulge things that seem objectively bad, like ignoring evidence against theories of change, and spending enormous time on politics and abstract objects that seem unproductive, and even overshadow EA’s excesses.
It may be prejudice, but being inside and seeing several organizations of various classes, this looks overdetermined for dysfunction once these orgs reach any scale.
Again, at risk of bias, it’s hard not to indulge my personal suspicion is that these intensely chaotic environments select for self-replication and media attention with the results:
Why they exist or at least we hear about these particular orgs is their ability to be aggressive
Their ability to focus and gather resources is limited
The aggressive orgs are selected for over more functional, slower orgs, crippling the ecosystem for strong social organizations.
The leaders and cultures arising from them are suspect culturally and “epistemically’
why do this footnote exist, I deleted it
why do this footnote also exist, I also deleted it
Sorry I am not sure I follow this post. I am not really commenting on how much DXE should grow, I’m not involved. However, if I was looking for those “moral optimizers” outside of EA that are surprisingly hard to find, I think that one place you can find them is DXE. It’s an existence proof—there are IMO sincere critics as the OP discusses.
If I were going to discuss whether DXE should grow, I would just try to list what they have accomplished and do some estimates of the costs. Heuristics about types of organization, the quality of the cultures involved, etc., would be of lower interest to me.
The parent comment wasn’t really reply to you, and in some sense this neither is this comment (but it’s not intended to exclude or talk past you either).
Basically, I am observing the mini-activism being done by you, which is one instance of a broader class of what I see to be activity and related agendas trying to steer EA in a certain way. (Although with wildly different object level aims) what these people have in common is trying to steer EA using models, beliefs and patterns from left social movements.
My base model ideology is basically coastal liberal, so I’m not opposed to your end goal (modulo the issues of very different beliefs like timelines, values of sentient entities, and how actual execution/tractability/competence affects in end result). In fact, I suspect my goals are almost identical to yours.
It’s rather that I believe:
Many of these activists are lemons, and they won’t be able to execute on their goals, for a variety of reasons, not least of which is lacking understanding of the people and institutions they criticize and want to change.
The viability here is not even close. To calibrate, even if they were empowered by 1000%, they would probably still fail and result in tumult.
More substantively, I think a reasonable interpretation is that the “establishment in power”, as you might say, are perfectly aware of everything in this comment. For virtuous reasons, they won’t accept this activism, and have to react by shutting down a lot of progress for fear of dilution and tumult.
I see the “bycatch” from this shutting down as obstructing many good people, because basically fast growth can’t be trusted.
In addition to this bycatch, the ideas and language these activists are using are resulting in a collision on material issues (e.g. “decentralization”) that they don’t actually know how to solve, but others do, but now it’s less tenable to voice the issues by the viable people.
It’s a worse form of crowding out, a sort of Gresham’s law, but it’s further counterproductive in that it’s increasing the pressure, empowering further bad activism, which is pathological.
This is counterproductive and blocking substantive progress, if I’m correct above, this really hurts many issues we care about. There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn’t a concern.
I believe I’ve studied various movements/orgs/ideologies for this specific reason to understand and resolve these scenarios.
Instead of just saying the above (which I just now did), my comment was setting up the background, with (maybe not quite) object-level discussion of some social movements, sort of to interrogate how this would play out and how to think about addressing this.
I think this discussion would have to be several layers less removed from the object level in order to contain insight.
Your explicit claim seems to be that fear of leftism / leftist activist practices are responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating slower than they would if they did not have this fear. Your beliefs about magnitude of this slowdown are unclear. (do you think growth has been halved? tenthed?)
You seem to have strong priors that this would be true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community building initiatives over the past 5 years that have tried to get funding which were rejected, the EA Hotel and some other thing for training AI safety researchers, and the reasons for rejection were both specific and completely removed from anything you have discussed.
--
I chose the most contentful and specific part of your writing to react to IMO. I think your commentary would be helped by containing more content per word (above zero?)
I repeat that my comments are for onlookers or to lay out pieces of a broader argument, interrogate responses, and won’t be understood right now.
I encourage you to consider my claim that my goals are aligned to yours and then consider the more generous version of my views.
Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.
To get to evidence about “leftist patterns of activism” concerns.
I don’t think you will hear this explicitly, for the very reasons you demonstrated in your reply. Explicitly excluding an ideology or laying naked the uselessness of activism looks and feels really bad and will be reacted to with immense hostility, even if it is not the direct issue and the real issue is distinct and motivated principledly.
Instead, the word here is “dilution”,’which is a major concern that you will hear explicitly publicly and even then it’s often voiced reluctantly. I think that most people who have this concern, fear activism or fear of strong underlying beliefs that aren’t aligned to EA.
I think there are very few examples of outright grifting or wholesale mimicry, and if examined most incidents involve views of the world EAs find misguided or counterproductive.
I have spent years near or around environmental and conservation movements and I appear dyed in the wool and would pass easily. Models from these experiences strongly indicate the problematic behaviour uses these patterns (which afflict these movements too) for example, because these people tell me outright.
(Not necessarily the people referenced above) there are several instances of people who are talented, good communicators, who have spent time in community building. These people have been rejected without explanation or remedial instructions. This is despite outlining specific plans to do work and communicate EA material. It seems like one should just fund dozens of these nascent people if the concern is that they are just OK. It doesn’t seem costly to hire an “OK” community builder but it seems extremely costly to risk funding people to replicate themselves and entrench behaviour that is focused on building constituencies on prosaic or non-EA causes, which I believe describes leftist activism well.
To be clear, as before, I’m not saying these rejected people are activists or misguided. The concern is bycatch from these filters.
Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit involve similar patterns of accusations of centralization, censorship, and dismissal from an out of touch, self interested central authority on causes no one cares about.
I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? for you to change yours? I would engage in less “leftist micro activism”? I would decide DXE is probably net harmful instead of net positive? I would start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural / ideological association.
--
I agree that the concerns around “dilution” are evidence of the phenomenon you are discussing.
It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.
--
TLDR:
Yes, this seems to follow the format of your entire thesis
Agrippa is engaging in, or promoting X (X is not particularly specificied in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster)
X or some subset of X is often involved in the toxic and incompetent culture of toxic and incompetent leftist activism
Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded less things for fear of it) so Agrippa should not engage in or promote X
At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
[ This comment is addressing Agrippa and not related to my other comments/beliefs about leftist activism ]
This reply is generous and thoughtful of you.
Yes, you are exactly right in your thoughts here.
The truth is that I didn’t mean to write about you, Sapphire, or DXE at all. As you noticed, there are in fact limited or no object-level issues related to you in my comment chain.
This is deliberate. I guess that is exactly why I picked you to start this chain. As you say:
As mentioned, I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object-level issues of someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
For you, that’s not a concern, because I’m not even talking about the issues you care about. I also think your issues have different emotional character and are more abstract (30M of funding to a defecting AI safety org).
Another motivation of mine, more (less?) principled, is that I believe you and Sapphire are picking an unreasonable fight with Michael St Jules in this comment chain.
I think he was talking about specialization (“This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations”), and I thought you ignored this reasonable explanation, trying instead to pin down some excessive deference or favoring of concentration of power (and his beliefs about the specific funders, which you and Sapphire may not understand well, as this is cause-area dependent).
Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on the AI safety object-level issues that I think you care about. I also believe he is a “moderate” who doesn’t want concentration of thought or power.
This made me annoyed (it does sort of resemble some kinds of leftist activism), and I sort of trolled you with patterns I thought “rhymed” with what you did.
This is just bad writing on my part. I meant “here” to mean, in EA or in EA discussion, and not referring to your behavior, strategy or comments.
>At the object level, X seems to be “giving DXE as an example of people who include credible moral optimizers that don’t align with EA”. If X includes other posts by me, perhaps it includes “claiming that CEA has not done a good job at community building or disbursing funds” (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and “whining that EA has ended up collaborating with, instead of opposing, the AI capabilities work” (which also does not rest on anything I would consider even vaguely leftist coded).
This is really thoughtful, self-aware, and genuinely impressive. It is generous of you to consider, and gives me too much credit.
I appreciate the praise! Very cool.
I don’t agree with your analysis of the comment chain.
These assertions / assumptions aren’t true. He didn’t limit his commentary (which was a reply / rebuttal to Sapphire) to animal welfare. Even if he had, that would be irrelevant, given that animal welfare is Sapphire’s dominant cause area. In fact, his response re: Rethink (corrected by Sapphire) was misleading! So I’m not sure how this reading is supported.
I am also not really sure how this reading is supported.
Tangentially: As a matter of fact I think that EA has been quite negative for animal welfare because in large part CEA is a group of longtermists co-opting efforts to organize effective animal welfare and then neglecting it. I am a longtermist too but I think that the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage animal welfare as a cause area about equally as much as longtermism, excluding donations.
There is really not a shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than me. Discussion that I find valuable is overwhelmingly specific, clear, object-level. Heuristics are fine but should be clearly relevant and strong. Etcetera. Not doing so is responsible for a ton of noise, and the noise is even noisier if it’s in a reply setting and superficially resembles conversation.
For some evidence of this, here is what one of the founders of Extinction Rebellion (Roger Hallam, who got cancelled or something, I don’t know) wrote about infighting:
...
...
Again, this is the hardcore former leader of XR (who got cancelled himself at one point), picking very basic fights over ideologies and primitive decisions like governance and management (and I think he got deposed or something because of it, but it’s all just a big soup).
I’m sure there’s every permutation of this “left” vs “right” fighting going on constantly.
The point is that I’m skeptical that these orgs and cultures are a positive example for anything besides self-replication.
DXE Bay is not very decentralized. It’s run by the five people in ‘Core Leadership’. The leadership is elected democratically, though there is a bit of complexity since Wayne is influential but not formally part of the leadership.
Leadership being replaced over time is not something to lament. I would strongly prefer more uhhhh ‘churn’ in EA’s leadership. I endorse the current leadership quite a bit and strongly prefer that several previous ‘Core’ members lost their elections.
Note: I haven’t been very involved in DXE since I left California. It’s really quite concentrated in the Bay.
I think this is a fairly common/prominent concern in left circles e.g. The Tyranny of Structurelessness.
I wouldn’t really consider DXE particularly horizontalist? Paging @sapphire
I’m also not sure in what sense these quotes would be evidence of anything about DXE