Currently researching how cost-benefit analysis is used in US regulatory decision-making and what this might imply for the regulation of Frontier AI. Supervised by John Halstead (GovAI).
In the past, I’ve done community building and operations at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.
I also sometimes worry about the big-picture epistemics of EA à la “Is EA just an ideology like any other?”.
I found the framing of “Is this community better-informed relative to what disagreers expect?” new and useful, thank you!
To point out the obvious: your proposed policy of updating away from EA beliefs when they stem in large part from priors is less applicable to the many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial about when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is, of course, not very epistemically modest of them.)
Here are a few more not-well-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:
It's hard to determine whether EA beliefs are weird because we were weird from the get-go or because we did some novel piece of research/thinking. For example, was Toby Ord concerned about x-risks in 2009 because he had unusual priors, or because he had thought through novel considerations that are obscure to outsiders? People would probably introduce their own biases when making this judgment. I think you could even try to make an argument like this about polyamory.
People probably tend to think a community is better-informed than expected the more time they spend engaging with it; at least, that's what I see empirically. So for people who've engaged a lot with EA, your policy of updating towards EA beliefs when EA seems better-informed than expected probably leads to deferring asymmetrically more to EA than to other communities, simply because they will have engaged less with those. (Of course, you could try to consciously correct for this.)
I often have the overall concern that with EA beliefs, "maybe most big ideas are wrong", just as most big ideas have been wrong throughout history. In this frame, our little inside pet theories and EA research provide almost no Bayesian information (because they are likely to be wrong), and it makes sense to stick closely to whatever seems most "common sense" or "established". But I'm not well-calibrated on how true "most big ideas are wrong" is. (This point is entirely compatible with what you said in the post, but it changes the magnitude of the updates you'd make.)
Side-note: I found this post super hard to parse and would have appreciated it a lot if it were more clearly written!