To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don’t recall everything that was said, but I think a large part of my argument was that “jumping ship” or being forced off for ideological reasons was not “fine” when it happened historically, for example to communists in Hollywood and to conservatives in academia, but represented disasters (i.e., very large losses of influence and resources) for those causes. I’m not sure if this changed Paul’s mind.
I’m not sure what difference in prioritization this would imply, or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse, and so the erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn’t currently seem like thinking or working on this issue should be a priority for me (even within EA, other people seem to have a clear comparative advantage over me). I would feel differently if this were an existential issue or had a high enough impact; I mostly dropped the conversation when it no longer seemed like that was at issue, and it instead seemed to be in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I’m happy enough to discuss, but I’m not sure there’s a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like “what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future?” I think I have a probability of perhaps 2-5% on “meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence.”
I’m always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that “think about it more” is cost-effective for me, but I’m more skeptical of that (I expect they’d instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them doing the thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues, and I didn’t intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and that I think remain very popular), and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities with similar liberal norms). Being happy to talk openly about “cancel culture” is part of that easy approach; if doing so led to serious negative consequences, that would be a sign that the issue is much more severe than I currently believe, and that it’s more likely I should do something. In that case I do think it’s clear there is going to be a lot of damage, though again I think we differ a bit in that I’m more scared about the health of our institutions than about people like me losing influence.
I think this is the crux of the issue. We have this pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people’s minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don’t want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
From my perspective, the ideal solution would be for you, in a similar future situation, to make it clearer from the start that you do think it’s an important problem that more people should work on. So instead of “and lots of people talk about it already”, which seems to suggest that enough people are working on it already, something like “I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere.”
Curious how things look from your perspective, or from a third-party perspective.