Hey Wil,

As someone who is likely in the “declining epistemics would be bad” camp, I will try to write this reply while mindfully attempting to be better at epistemics than I usually am.
Let’s start with some points where you hit on something true:
However I think the way this topic is being discussed and leveraged in arguments is toxic to fostering trust in our community
I agree that talk about bad epistemics can come across as being unwelcoming to newcomers and considering them stupid. Coupled with the elitist vibe many people get from EA, this is not great.
I also agree that many people will read the position you describe as implying “I am smarter than you”, and people making that argument should be mindful of this, and think about how to avoid giving this impression.
You cite as one of the implied assumptions:
Those with high quality epistemics usually agree on similar things
I think it is indeed a danger that “quality epistemics” is sometimes used as a shortcut to defend things mindlessly. In an EA context, I have often disagreed with arguments that defer strongly to experts in EA orgs. These arguments seem to neglect that such experts might have systematic biases precisely because they work in those orgs.
Personally, I probably sometimes use “bad epistemics” as a cached thought internally when encountering a position for which I have mostly seen arguments that I found unconvincing in the past.
Now for the parts I disagree with:
I scrolled through some of the disagreeing comments on Making Effective Altruism Enormous, and tried to examine if any have the implicit assumptions you state:
It is a simple matter to judge who has high quality epistemics
This comment argues that broadening the movement too much will reduce nuance by default. While it implies that EA discussions have more nuance than the average discussion, I do not think the poster or anyone else in the thread says it is easy to identify people with good epistemics. Furthermore, many argue that growth should be slow enough for newcomers to get used to EA discussion norms, which implies that people do not necessarily think bad epistemics are fundamental.
Those with high quality epistemics usually agree on similar things
I don’t think the strong version of this statement (“usually”) holds true for most people in the epistemics camp. Some people, including me, would probably agree that e.g. disagreeing with “it is morally better to prioritize expected impact over warm feelings” is usually a sign of bad epistemics. That is, there are a few core tenets on which “those with high quality epistemics” usually agree.
It’s a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics
While many people probably think it is likely, I do not think the majority consider it “a given”. I could not find a comment in the above discussion that argues or assumes it is obvious.
What I personally believe:
My vague position is that one of the core advantages of the EA community is caring about true arguments, and consequently earnest and open-minded reasoning. Insofar as I would complain about bad epistemics, it is definitely not that people are dumb. Rather, I think there is a danger that, in some discussions, people engage a bit more in what looks like motivated reasoning than the EA average, and seem less interested in understanding other people’s positions and changing their minds. These are differences of degree; I do not mean to imply that there is one camp that reasons perfectly and impartially and another that does not.
Without fleshing out my opinion too much (the goal of this comment is not to defend my position), I usually point to the thought experiment “What would have happened if Eliezer Yudkowsky wrote AI safety posts on the machine learning subreddit?” to illustrate how important having an open-minded and curious community can be.
For example, in your post you posit three implicit assumptions, and later link to a single comment as justification. To be fair, that comment does read as a little dismissive, but I don’t think it actually carries the three assumptions you outline, and it should not be used to represent a whole “camp”, especially since this debate was very heated on both sides. It is not really visible that you tried to charitably interpret the position you disagree with. And while it is good that you clearly state that something is an emotional reaction, I think it would also be good if that reaction were accompanied by a better attempt to understand the other side.
You make some great points here. I’ll admit my arguments weren’t as charitable as they should’ve been, and were motivated more by heat than by light.
I hope to find time to explore this in more detail and with more charity!
Your point about genuine truth seeking is certainly something I love about EA, and don’t want to see go away. Losing it is definitely a risk if we can’t figure out how to screen for that sort of thing.
Do you have any recommendations for screening based on epistemics?
Those with high quality epistemics usually agree on similar things
On factual questions, this is how it should be, and this matters. Putting it another way, it’s not a problem for EAs to come to agree on factual questions, without more assumptions.