For example, can I do better than just deferring to the “largest and smartest” expert group on “Might AI lead to extinction?” (which seems to be EA)? Can I instead look at the arguments and epistemics of EAs versus, say, opposing academics, and reach a better conclusion? (Better in the sense of “more likely to be correct”.) If so, by how much, and how exactly should I do that?
Deference is a major topic in EA. I am currently working on a research project simulating various models of deference.
So far, my findings indicate that deference is a double-edged sword:
You will tend to have more accurate beliefs if you defer to the wisdom of the crowd (or perhaps to a subject-matter expert—I haven’t specifically modeled this yet).
However, remember that others are also likely to defer to you. If they fail to track the difference between your all-things-considered, deferent best guess and the independent observations and evidence you bring to the table, this can inhibit the community’s ability to converge on the truth.
If the community is extremely deferent, and if there is about as much uncertainty about what the community’s collective judgment actually is as there is about the object-level question itself, then it tentatively appears better even for individual accuracy to be non-deferent. There may be even greater gains simply from being less deferent than the rest of the group.
Many of these problems can be resolved if the community has a way of aggregating people’s independent (non-deferent) judgments, and only then deferring to that aggregate judgment when making decisions. It seems to me progress can be made in this direction, though I’m skeptical we can come very close to this ideal.
So if your goal is to improve the community’s collective accuracy, it tentatively seems best to focus on articulating your own independent perspective. It is also good to seek this out from others, asking them not to defer but to give their own personal, private perspective.
But when it comes time to make your own decision, you will want to defer to a large (even extreme) extent to the community’s aggregate judgment.
Again, I haven’t included experts (or non-truth-oriented activists) in my model. I am also basing the model on specific assumptions about uncertainty, so there is plenty of generalization from a relatively narrow result going on here.
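To make that dynamic concrete, here is a minimal toy sketch in Python (my own illustrative construction, not the actual research model; the sequential reporting order, the deference weight `w`, and the Gaussian noise are all assumptions). Each agent blends their private signal with the running mean of earlier reports, so averaging the deferent reports double-counts whatever the earliest agents happened to see, while averaging the raw signals does not:

```python
import random
import statistics

def trial(n_agents, truth, noise, w, rng):
    """One run: agents report sequentially, each blending their private
    signal with the running mean of earlier agents' reports."""
    signals = [truth + rng.gauss(0, noise) for _ in range(n_agents)]
    reports = [signals[0]]  # the first agent has no one to defer to
    for s in signals[1:]:
        crowd = statistics.mean(reports)         # perceived community view
        reports.append(w * crowd + (1 - w) * s)  # deferent best guess
    # Error of aggregating raw private signals vs. deferent reports.
    return (abs(statistics.mean(signals) - truth),
            abs(statistics.mean(reports) - truth))

def run(n_trials=2000, n_agents=100, truth=0.0, noise=1.0, w=0.8, seed=0):
    rng = random.Random(seed)
    results = [trial(n_agents, truth, noise, w, rng) for _ in range(n_trials)]
    print("mean error, aggregating private signals: "
          f"{statistics.mean(e for e, _ in results):.3f}")
    print("mean error, aggregating deferent reports: "
          f"{statistics.mean(e for _, e in results):.3f}")

if __name__ == "__main__":
    run()
```

Under these toy assumptions the private-signal aggregate should come out markedly more accurate, since the deferent aggregate over-weights early evidence. This mirrors the point above: aggregate independent judgments first, and defer to the aggregate only afterward.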
The idea of deferring to common wisdom while continuing to formulate your own model reminds me of EY’s post on Lawful Uncertainty. Its focus was an experiment from the 1960s in which subjects guessed card colors from a deck of 70% blue cards. Subjects kept trying to guess red based on their own predictions, even though the optimal strategy was to always pick blue. EY’s insight, which this reminded me of, was:
Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue. They can pick blue each time, accumulating as many nickels as they can, while mentally noting their private guesses for any patterns they thought they spotted. If their predictions come out right, then they can switch to the newly discovered sequence.
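For what it’s worth, the arithmetic behind “always pick blue” is simple (assuming each guess is scored independently against a 70%-blue deck): always guessing blue is correct 70% of the time, whereas probability matching, i.e. guessing blue 70% of the time and red 30% of the time, is correct only 0.7 × 0.7 + 0.3 × 0.3 = 58% of the time.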
Your link didn’t get pasted properly. Here it is: Lawful Uncertainty.