Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it's very unlikely that "full-scale cancel culture" (I'll use "CC" from here) evolves within EA.
Some elements of my doubt:
Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
CEA actively shrank the size of EA Global because they thought it would improve the quality of the event.
80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
Obviously, none of these are perfect analogies, but I think there's a noteworthy pattern here.
The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
EA has a strong British influence (rather than being wholly rooted in the United States) and solid bases in other cultures; this makes us a bit less vulnerable to shifts in one nation's culture. Of course, the entire Western world is moving in a "cancel culture" direction to some degree, so this isn't complete protection, but it still seems like a protective factor.
Iāve also been impressed by recent EA work Iāve seen come out of Brazil, Singapore, and China, which seem much less likely to be swept by parallel movements than Germany or Britain.
Your comments on this issue include the most upvoted comments on my post, on Cullen's post, and on "Racial Demographics at Longtermist Organizations". It seems like the balance of opinion is very firmly anti-CC. If I began to see downvoting brigades on those types of comments, I would become much more concerned.
Compared to all of the above, a single local group's decision seems minor.
But I'm sure there are other reasons to worry. If anyone sees this and wants to create a counter-list ("elements of concern"?), I'd be very interested to read it.
(I'm occupied with some things, so I'll just address this point and maybe come back to others later.)
It seems like the balance of opinion is very firmly anti-CC.
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of "EA politics". If there's a danger or trend of EA going in a CC direction, I should be among the last to know.
Until recently, I had very little interest in politics or even socializing. (I once wrote, "And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.") Again, it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
I'm probably well within the top percentile of all EAs in terms of "cancel-proofness", because I have both an independent source of income and a non-zero amount of "intersectional currency" (e.g., I'm a POC and a first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, though I had no idea it would come in handy for this), and I don't like to give talks/presentations, so there's pretty much nothing about me that can be canceled.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly, because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to the development of CC do not depend on the balance of opinion favoring CC; they only require that those who are against it are afraid to speak up honestly and publicly (cf. "preference falsification"). That seems to already be the situation today.
Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they're worried about EA developing/joining CC, telling me what they've seen to make them worried, and saying that they can't talk publicly about it.
I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly
I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly, because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to the development of CC do not depend on the balance of opinion favoring CC; they only require that those who are against it are afraid to speak up honestly and publicly (cf. "preference falsification"). That seems to already be the situation today.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall, my attitude towards it is pretty similar to my attitude towards other hypotheses about institutional decline. I think people at EA orgs have far more reason to think about this issue than I do, but it may be difficult for them to do so productively.
If someone convinced me to be more pessimistic about "cancel culture", then I'd definitely think about it more. I'd be interested in concrete forecasts if you have any. For example, what's the probability that making pro-speech comments will itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?
Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of the people I'd want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like "Censure those who don't censure others for not censuring others for problematic speech" and take that to its extreme, but the rest of the world will get along fine without them, and it's not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g., it doesn't look like they will win elections).
in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)
I don't feel that way. I think that "exclude people who talk openly about the conditions under which we exclude people" is a deeply pernicious norm, and I'm happy to keep blithely violating it. If a group excludes me for doing so, then I think it's a good sign that the time had come to jump ship anyway. (Similarly if there were pressure for me to enforce a norm I disagreed with strongly.)
I'm generally supportive of pro-speech arguments and efforts, and I was glad to see the Harper's letter. If this is eventually considered cause for exclusion from some communities and institutions, then I think enough people will be on the pro-speech side that it will be fine for all of us.
I generally try to speak my mind if I believe it's important, don't talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can't discuss in public, then my intention is to discuss them.
I would only intend to join an internet discussion about "cancellation" in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).
To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship", or being forced off for ideological reasons, was not "fine" when it happened historically (for example, to communists in Hollywood and conservatives in academia) but instead represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.
I'm not sure what difference in prioritization this would imply, or whether we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse, and so the erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA, other people seem to have a clear comparative advantage over me). I would feel differently if this were an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed to fall in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates of the expected harms are higher than mine, which I'm happy enough to discuss, but I'm not sure there's a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like "what's the probability that making pro-speech comments will itself be a significant political liability at some point in the future?" I think I have a probability of perhaps 2-5% on "meta-level pro-speech comments like this one eventually become a big political liability, and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence."
I'm always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that "think about it more" is cost-effective for me, but I'm more skeptical of that (I expect they'd instead just do that thinking themselves and tell me what they think I should do differently as a result, since the case for them thinking about it will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues, and I didn't intend for the grandparent to be pushing against that.
For me, it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and that I think remain very popular), and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities with similar liberal norms). Being happy to talk openly about "cancel culture" is part of that easy approach, and if that led to serious negative consequences, it would be a sign that the issue is much more severe than I currently believe and that it's more likely I should do something. In that case, I do think it's clear there is going to be a lot of damage, though again I think we differ a bit in that I'm more scared about the health of our institutions than about people like me losing influence.
I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues, and I didn't intend for the grandparent to be pushing against that.
I think this is the crux of the issue. We have a recurring pattern: I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that; but maybe you were just trying to explain why you don't want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
I think, from my perspective, the ideal solution would be if, in a similar future situation, you could make it clearer from the start that you do think it's an important problem that more people should work on. So instead of "and lots of people talk about it already", which seems to suggest that enough people are working on it already, something like "I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere."
Curious how things look from your perspective, or a third party perspective.
Why did it take someone like me to make the concern public?
I don't think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It's not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and as an infrequent Forum viewer before that) was "The Importance of Truth-Oriented Discussions in EA".
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don't know either way.
I've also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they'd feel comfortable commenting about their views, though I can't be sure.
I also have direct evidence in the form of EAs contacting me privately to say that they're worried about EA developing/joining CC, and telling me what they've seen to make them worried, and saying that they can't talk publicly about it.
I've gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.
My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...
...and because writing for an EA audience about any topic can be intimidating. I've had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. The same goes for career choice. And for criticism of EA orgs. And for other topics, even ones that were completely nonpolitical, where people were just worried about looking foolish. Will MacAskill had "literal anxiety dreams" when he wrote a post about longtermism.
As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I'm not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.
*****
I have seen incidents in the community that worried me. But I haven't seen a pattern of such incidents; they've been scattered over the past few years, and they all seem like poor decisions by individuals or orgs that didn't cause major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.
Also: I'd be glad to post something in the EA Polls group I created on Facebook.
Because answers are linked to Facebook accounts, some people might hide their views, but at least it's a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don't know what wording you'd want around such a question.
the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public?
My guess is that your points explain a significant share of the effect, but I'd guess the following is also significant:
Expressing worries about how some external dynamic might affect the EA community isn't often done on this Forum, perhaps because it's less naturally "on topic" than discussion of, e.g., EA cause areas. I think this applies to worries about so-called cancel culture, but also to, e.g.:
How does US immigration policy affect the ability of US-based EA orgs to hire talent?
How do financial crises or booms affect the total amount of EA-aligned funds? (E.g., I think a significant share of Good Ventures' capital might be in Facebook stock?)
Both of these questions seem quite important and relevant, but I recall less discussion of them than I'd have at first glance expected based on their importance.
(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn't immediately find. But I think it's somewhat telling that there the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)