This is a bit orthogonal to your question but, imo, part of the same conversation.
A take I have on social capital/PR concerns in EA is that sometimes when people say they are worried about ‘social capital’ or ‘PR’, it means ‘I feel uneasy about this thing, and it’s something to do with being observed by non-EAs/the world at large, and I can’t quite articulate it in terms of utilitarian consequences’.
And how much weight we should give to those feelings sort of depends on the situation:
(1) In some situations we should arguably ignore them completely. (E.g., we should probably ignore the fact that lots of people think animal advocacy is weird/bad, and just keep doing animal advocacy.)
(2) In other situations, we should pay attention to them, but only inasmuch as they tell us that we need to be cautious in how we interact with non-EAs. We might want to be circumspect about how we talk about certain things, or what we do, but deep down we recognize that we are right and that outsiders are wrong to judge us. (Maybe doing stuff related to AI risk falls into this category.)
(3) In yet other situations, however, I suggest that we should take these feelings of unease as a sign that something is wrong with what we are doing or arguing. We are uneasy about what people will think because we recognize that people with other ideological commitments also have wisdom, and are also right about things, and we worry that this might be one of those times (but we have not yet articulated that in straightforward "this seems to be negative EV on the object level, separate from PR concerns" terms).
I think lots of people think we are currently in (2): that good epistemics would let us discuss whatever we like, but some things look so bad that they'll damage the movement. I, however, am firmly in (3) - I'm uncomfortable about the letter and the discussions because I think that the average progressive person's instinct of 'this whole thing stinks and is bad'...probably has a point?
To further illustrate this, a metaphor: imagine a person who, when they do certain things, often worries about what their parents would think. How much should they care? Well, it depends on what they’re doing and why their parents disapprove of it. If the worry is ‘I’m in a same-sex relationship, and I overall think this is fine, but my homophobic parents would disapprove’ - probably you should ignore the concern, beyond being compassionate to the worried part of you. But if the worry is ‘I’m working for a really harmful company, and I’ve kinda justified this to myself, but I feel ashamed when I think of what my parents would think’ - that might be something you should interrogate.
Maybe another way to frame this is ‘have a Chesterton’s fence around ideologies you dismiss’ - like, you can only decide that you don’t care about ideologies once you’ve fully understood them. I think in many cases, EAs are dismissing criticisms without fully understanding why people are making them, which leads them to see the whole situation as ‘oh those other people are just low decouplers/worried about PR’, rather than ‘they take the critique somewhat seriously but for reasons that aren’t easily articulable within standard EA ethical frameworks’.