These seem like broadly reasonable heuristics, but they kick the can on who is an expert, which is where most of the challenge in deference lies.
The canonical recent example of this is COVID, when doctors and epidemiologists, who were perceived by the general public as the relevant experts, weighed in on questions of public policy, in many cases giving the impression of consensus in their communities. I think there is a good argument that public policy experts were in fact better placed to give recommendations on many of these issues. Regardless, it wasn’t at all clear at the time, at least not immediately, where the relevant expertise lay.
You might say that this is a problem only for the relatively uninformed, but it seems like an open issue in lots of domains. Depending on how you characterize the expert population, it seems reasonable to me to presume that the proportion of experts who believe AI risk is worth working on could be sub-Lizardman. You might say that this assumes too broad a definition of experts, but that response begs the question: field boundaries are porous, so whether a field is “sound” is itself ill-defined.
True, it can be a bit of a challenge to use this heuristic.
Though do note my third caveat: this isn’t a replacement for EV calculations, and a lot of high-impact areas have this issue.