As you are someone who falls into the “prioritize epistemics” camp, I would have preferred for you to steelman the “camp” you don’t fall in, and frame the “other side” in terms of what they prioritize (like you do in the title), rather than characterizing them as compromising epistemics.
This is not intended as a personal attack; I would make a similar comment to someone who asked a question from “the other side” (e.g.: “On one side, you have the people who prioritize making sure EA is not racist. On the other, you have the people who worry that if we don’t compromise at all, we’ll simply end up following what’s acceptable instead of what’s true.”)
In general, I think this kind of framing risks encouraging tribal commentary that assumes the worst in each other, and is unconstructive to shared goals. Here is how I would have asked a similar question:
“It seems like there is a divide on the forum around whether Nick Bostrom/CEA’s statements were appropriate. (Insert some quotes of comments that reflect this divide). What do people think are the cruxes that are driving differences in opinion? How do we navigate these differences and work out when we should prioritize one value (or sets of values) over others?”
I don’t doubt that there are better ways of characterising the situation.
However, I do think there is a divide between those that prioritise epistemics and those that prioritise optics/social capital, when push comes to shove.
I did try to describe the two sides fairly, i.e. “saving us from losing our ability to navigate” vs. “saving us from losing our influence”. Both of these sound fairly important and compelling, and either failure mode could plausibly cause the EA movement to fail to achieve its objectives. And, as someone who did a tiny bit of debating back in the day, I don’t think I’d have difficulty taking either side in a debate.
So I would be surprised if my framing were quite as biased as you seem to indicate.
That said, I’ve gone back and attempted to reword the post in the hope that it reduces some of these concerns.
Yeah, I’m not saying there is zero divide. I’m not even saying you shouldn’t characterize both sides. But if you do, it would be helpful to find ways of characterizing both sides with similarly positively-coded framing. Like, frame this post in a way that would pass an ideological Turing test, i.e. people can’t tell which “camp” you’re in.
The “not racist” vs “happy to compromise on racism” was my way of trying to illustrate how your “good epistemics” vs “happy to compromise on epistemics” wasn’t balanced, but I could have been more explicit in this.
Saying one side prioritizes good epistemics and the other side is happy to compromise epistemics seems to clearly favor the first side.
Saying one side prioritizes good epistemics and the other side prioritizes “good optics” or “social capital” still favors the first side, though more weakly. For example, I don’t think it’s a charitable interpretation of the “other side” that they’re primarily doing this for reasons of good optics.
I also think asking the question more generally is useful.
For example, my sense is also that your “camp” still strongly values social capital, just a different kind of social capital. In your response to CEA’s PR statement, you say “It is much more important to maintain trust with your community than to worry about what outsiders think”. Presumably trust within the community is also a form of social capital. By your statement alone you could say one camp prioritizes maintaining social capital within the movement, and one camp prioritizes building social capital outside of the movement. I’m not saying this is a fair characterization of the differences in groups, but there might not be a “core divide”.
It might be a difference in empirical views: e.g. both groups believe social capital and good epistemics are important, and both value them for predominantly consequentialist reasons, so there is no “core” difference. But one group thinks that, empirically, the downstream effects of breaking a truth-optimizing norm lead to the worst outcome. Perhaps this is associated with a theory of change built around a closely knit, high-trust, high-leverage group of people: it matters less what the rest of the world thinks, because it’s more important that this high-leverage group can drown out the noise and optimize for truth, so they can make the difficult decisions that are likely to be missed by simply following what’s popular and accepted. The other group thinks that the downstream effects of alienating large swaths of an entire race and people who care about racial equality (and the norm that this is an acceptable tradeoff) lead to the worst outcome. Perhaps this is associated with a view that buy-in from outsiders is critical to scaling impact.
But people might have different values, different empirical views, different theories of change, or some combination. That’s why I am more reluctant to frame this issue in such clearly demarcated ways (“central divide”, “on one side”, “this camp”), when it’s not settled that this is the cause of the differences in opinion.
Hmm… the valence of the word “compromise” is complex. It’s negative in “compromising integrity”, but “being unwilling to compromise” is often used to mean that someone is being unreasonable. However, I suppose I should have predicted this wording wouldn’t have been to people’s liking. Hopefully my new wording of “trade-offs” is more to your liking.