I’ve hypothesized that one potential failure mode is that experts are not used to communicating with EA audiences, who tend to be more critical/skeptical of ideas (on a rational level). As a result, experts may not always be as explicit about certain concerns or issues as they could be, perhaps because they expect their audiences to defer to them, or because they have a model of which claims people will be skeptical of and therefore need defending/explaining, and that audience model doesn’t transfer well to EA. I think there may be a case/example worth highlighting with regard to nuclear weapons or international relations, though it’s also possible that EA skepticism in some of these cases is justified, given EA’s greater emphasis on existential risks rather than smaller risks.