(I’m occupied with some things so I’ll just address this point and maybe come back to others later.)
It seems like the balance of opinion is very firmly anti-CC.
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of “EA politics”. If there’s a danger or trend of EA going in a CC direction, I should be among the last to know.
Until recently I have had very little interest in politics or even socializing. (I once wrote “And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.”) Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
I’m probably well within the top percentile of all EAs in terms of “cancel-proofness”, because I have both an independent source of income and a non-zero amount of “intersectional currency” (e.g., I’m a POC and a first-generation immigrant). I also have no official EA affiliations (a status I deliberately maintained in part to be a more unbiased voice, though I had no idea it would come in handy for this), and I don’t like to do talks or presentations, so there’s pretty much nothing about me that can be canceled.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly
I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don’t think about this issue because it’s not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to my attitude towards other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.
If someone convinced me to get more pessimistic about “cancel culture” then I’d definitely think about it more. I’d be interested in concrete forecasts if you have any. For example, what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?
Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I’d want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like “Censure those who don’t censure others for not censuring others for problematic speech” and take that to its extreme, but the rest of the world will get along fine without them and it’s not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn’t look like they will win elections).
in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)
I don’t feel that way. I think that “exclude people who talk openly about the conditions under which we exclude people” is a deeply pernicious norm and I’m happy to keep blithely violating it. If a group excludes me for doing so, then I think it’s a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)
I’m generally supportive of pro-speech arguments and efforts and I was glad to see the Harper’s letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.
I generally try to state my mind if I believe it’s important, don’t talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can’t discuss in public then my intention is to discuss them.
I would only intend to join an internet discussion about “cancellation” in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).
To follow up on this: Paul and I had an offline conversation, but it petered out before reaching a conclusion. I don’t recall all that was said, but I think a large part of my argument was that “jumping ship” or being forced off for ideological reasons was not “fine” when it happened historically (for example, communists being pushed out of Hollywood and conservatives out of academia), but rather represented disasters (i.e., very large losses of influence and resources) for those causes. I’m not sure if this changed Paul’s mind.
I’m not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn’t currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I’m happy enough to discuss, but I’m not sure there’s a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like “what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future?” I think I have a probability of perhaps 2-5% on “meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence.”
I’m always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that “think about it more” is cost-effective for me, but I’m more skeptical of that (I expect they’d instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about “cancel culture” is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it’s more likely I should do something. In that case I do think it’s clear there is going to be a lot of damage, though again I think we differ a bit in that I’m more scared about the health of our institutions than people like me losing influence.
I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
I think this is the crux of the issue. We have a pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people’s minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don’t want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
From my perspective, the ideal solution would be for you, in a similar future situation, to make it clearer from the start that you do think it’s an important problem that more people should work on. So instead of “and lots of people talk about it already”, which seems to suggest that enough people are working on it already, something like “I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere.”
Curious how things look from your perspective, or a third party perspective.
Why did it take someone like me to make the concern public?
I don’t think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It’s not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was “The Importance of Truth-Oriented Discussions in EA”.
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being canceled, but I don’t know either way.
I’ve also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they’d feel comfortable commenting about their views, though I can’t be sure.
I also have direct evidence in the form of EAs contacting me privately to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I’ve gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.
My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...
...and because writing for an EA audience about any topic can be intimidating. I’ve had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. Same goes for career choice. And for criticism of EA orgs. And other topics, even if they were completely nonpolitical and people were just worried about looking foolish. Will MacAskill had “literal anxiety dreams” when he wrote a post about longtermism.
As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I’m not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.
*****
I have seen incidents in the community that worried me. But I haven’t seen a pattern of such incidents; they’ve been scattered over the past few years, and they all seem like isolated poor decisions by individuals or orgs, none of which caused major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.
Also: I’d be glad to post something in the EA Polls group I created on Facebook.
Because answers are linked to Facebook accounts, some people might hide their views, but at least it’s a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don’t know what wording you’d want around such a question.
the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public?
My guess is that your points explain a significant share of the effect, but I suspect the following also plays a significant role:
Expressing worries about how some external dynamic might affect the EA community isn’t often done on this Forum, perhaps because it’s less naturally “on topic” than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:
How does US immigration policy affect the ability of US-based EA orgs to hire talent?
How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures’s capital might be in Facebook stocks?)
Both of these questions seem quite important and relevant, but I recall less discussion of them than I’d have expected at first glance, given their importance.
(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn’t immediately find. But I think it’s somewhat telling that here the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)