I found the framing of “Is this community better-informed relative to what disagreers expect?” new and useful, thank you!
To point out the obvious: your proposed policy of updating away from EA beliefs if they come largely from priors is less applicable to the many EAs who want to condition on “EA tenets”. For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it’s fine that we were “unusual from the get-go” regarding this prior. (This is of course not very epistemically modest of them.)
Here are a few more not-fully-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:
- It’s kind of hard to determine whether EA beliefs are weird because we were weird from the get-go or because we did some novel piece of research/thinking. For example, was Toby Ord concerned about x-risks in 2009 because he had unusual priors or because he had thought about novel considerations that are obscure to outsiders? People would probably introduce their own biases while making this judgment. I think you could even try to make an argument like this about polyamory.
- People probably come to think a community is better-informed than expected the more time they spend engaging with it; at least this is what I see empirically. So for people who’ve engaged a lot with EA, your policy of updating towards EA beliefs if EA seems better-informed than expected probably leads to deferring asymmetrically more to EA than to other communities, since they will have engaged less with the latter. (Of course you could try to consciously correct for that.)
- Overall, I often have the concern with EA beliefs that “maybe most big ideas are wrong”, just as most big ideas have been wrong throughout history. In this frame, our little inside pet theories and EA research provide almost no Bayesian information (because they are likely to be wrong), and it makes sense to stick closely to whatever seems most “common sense” or “established”; a toy calculation below illustrates how strongly this dampens updates. But I’m not well-calibrated on how true “most big ideas are wrong” is. (This point is entirely compatible with what you said in the post, but it changes the magnitude of the updates you’d make.)
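To make the magnitude point concrete with entirely made-up numbers (a toy sketch, not anything from the post): suppose a hypothesis H starts at probability 0.2, and an inside argument E would be a 10:1 Bayes factor for H if sound (say $P(E \mid H, \text{sound}) = 1$ and $P(E \mid \neg H, \text{sound}) = 0.1$), is uninformative if unsound ($P(E \mid \cdot, \text{unsound}) = 0.5$ either way), and is sound with probability only 0.1. The effective Bayes factor is then

$$\frac{P(E \mid H)}{P(E \mid \neg H)} = \frac{0.1 \cdot 1 + 0.9 \cdot 0.5}{0.1 \cdot 0.1 + 0.9 \cdot 0.5} = \frac{0.55}{0.46} \approx 1.2,$$

so the prior odds of 1:4 move only to about 3:10, i.e. a posterior of roughly 0.23, versus roughly 0.71 if you fully trusted the argument. With numbers in that ballpark, sticking close to the prior is what the math recommends.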
Side-note: I found this post super hard to parse and would’ve appreciated it a lot if it was more clearly written!
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.
On the point about “EA tenets”: if you mean normative tenets, then yes, how much you want to update on others’ views on that front might be different from how much you want to update on others’ empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing—along the lines of this post, say) or more like preferences (in which case you don’t). My own guess is that they’re more like beliefs—i.e. we should take the fact that most people reject temporal impartiality as at least some evidence against longtermism—but thanks for noting that there’s a distinction one might want to make here.
On the three bullet points: I agree with the worries on all counts! As you sort of note, these could be seen as difficulties with “implementing the policy” appropriately, rather than problems with the policy in the abstract, and that is how I see them. But I take the point that if an idea is hard enough to implement, there might not be much to be learned from it in practice.