Yeah you’re right, it does seem separate, although sort of an adjacent problem? I think the larger problem here is something like “EA opinions are influenced by other EAs more than I’d like them to be”. Over-deference and filter bubbles are two ways in which I think getting too sucked into EA can create bad epistemics.
I didn’t mean to call out MIRI specifically, and just tried to choose an EA org where I could picture filter bubbles happening (since MIRI seems pretty isolated from other places). I know very little about what MIRI work *actually* looks like. I’ll change the original comment to reflect this.