tl;dr: I think deference is more concerning for EA than for other cultures. Relative to how much we should expect EAs to defer, they defer way too much.
1) We should expect EA to have much less of a deference culture than other cultures, since a lot of EA claims rest on things like answers to philosophical questions, predictions about the long-term future, etc. These questions are really hard to answer, and I don’t think most experts have a much better shot at answering them than some relatively smart and quantitative university students. Questions about moral philosophy are exactly the kind of questions you’d expect to get a super wide range of answers to, so the number of EAs who claim they’re longtermist is kind of surprising and unexpected. I think this is a sign there’s more deference than there should be.
On the other hand, for more concrete and established scientific fields where experts do have a much better chance at making decisions than students, it makes way more sense to defer to them about what things are important.
2) EAs are optimizing for altruism, so decisions on what to work on require lots of thought. I’m guessing most non-EA people choose to work on things they enjoy or are emotionally invested in.
I can easily tell you, without any evidence or deference, what things I think are fun and am emotionally invested in. But it takes a lot more time and research to come up with what I think is most impactful.
I think EAs having more evidence and reasoning to back up what we’re working on just naturally arises from being an EA, and doesn’t necessarily mean we have better epistemics than other communities.
3) Explicitly saying when you’re deferring to someone seems to do a better job of convincing people “wow! these EA people seem more correct than most other communities” than of actually making us more correct than most other communities. Being explicit about when we defer to people still means we might defer way too much.
4) Edit: I think this point is not actually about deference. Also, I know very little about MIRI and have no idea if this is in any way realistic. I’m guessing you could replace MIRI with some other org and this kind of story would be true, but I’m not totally sure.
Also, idk, I feel like some things that look like original, detailed thinking actually end up being closer to deference than I’d like. I think a story that’s probably happened before is: “MIRI researcher thinks hard about AI stuff and comes up with some original thoughts with lots of evidence. Writes them up on the Alignment Forum. Tons of karma, yay.”
Sure, the thinking is original, has evidence to back it up, and looks really nice, pretty, and useful. That said, even if this is original thinking, I’m guessing that if you looked at how this person was using other people’s opinions to shape their own, it would break down something like this:
- Talking to other MIRI people: 80%
- Talking to non-MIRI EAs: 10%
- Reading books/opinions written by non-EAs relevant to what they’re working on: 5%
- Talking to non-EAs: 5%
So even if this thinking looks really original and intelligent, it still seems like a problem with deference. Not relying on other MIRI researchers an unhealthy amount probably looks more like getting more insight from mainstream academia and non-EAs.
I guess the point here is that it’s much easier to look like you’re not deferring to people too much than to actually not defer to people too much.
5) I think people in general defer way too much and don’t think hard enough about what to work on. I think EAs defer too much and occasionally don’t think hard enough about what to work on. Being better than the former doesn’t really mean I’m satisfied with the latter.
FWIW I agree that EAs should probably defer less on average. So e.g. I agree with your point 5.
I don’t like the example you gave about MIRI—I think filter bubbles & related issues are real problems but distinct from deference; nothing in the example you gave seems like deference to me. (Also, in my experience the people from MIRI defer less than pretty much anyone in EA. If anyone is deferring too little, it’s them.)
Yeah, you’re right, it does seem separate, though sort of an adjacent problem? I think the larger problem here is something like “EA opinions are influenced by other EAs more than I’d like them to be.” Over-deference and filter bubbles are two ways in which I think getting too sucked into EA can create bad epistemics.
I didn’t mean to call out MIRI specifically, and just tried to choose an EA org where I could picture filter bubbles happening (since MIRI seems pretty isolated from other places). I know very little about what MIRI work *actually* looks like. I’ll change the original comment to reflect this.