What’s the best post to read to learn about how EAs conceive of “gears-level understanding”/”gears-level evidence”?

I’m not sure. I used to call it “technical” and “testimonial” evidence before I encountered “gears-level” on LW. While evidence is just evidence and Bayesian updating stays the same, it’s useful to distinguish between the two categories, because in a high-trust community that frequently updates on each other’s opinions, you risk information cascades and double-counting of evidence.
Information cascades develop consistently in a laboratory situation [for naively rational reasons, in which other incentives to go along with the crowd are minimized]. Some decision sequences result in reverse cascades, where initial misrepresentative signals start a chain of incorrect [but naively rational] decisions that is not broken by more representative signals received later. - (Anderson & Holt, 1998)
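To make the cascade mechanism concrete, here is a minimal simulation sketch of the sequential-guessing (urn) setup that experiments like Anderson & Holt’s use. The signal accuracy, the tallying rule, and the tie-break are my own simplifying assumptions, not the paper’s exact design: each agent privately sees one noisy signal, publicly sees all earlier guesses, and announces whichever state looks more likely when earlier public guesses are counted as if they were independent signals. That counting is exactly the double-counting that locks in a cascade, including reverse cascades when the first few signals happen to be misleading.

```python
import random

def run_cascade(n_agents=20, p_correct=2/3, true_state=1, seed=0):
    """Toy sequential-guessing simulation (assumed parameters, not the
    original experimental design). Each agent gets one private signal
    that matches the true state with probability p_correct, sees all
    earlier public guesses, and naively counts each public guess as one
    more signal -- the double-counting that produces cascades."""
    rng = random.Random(seed)
    guesses = []
    for _ in range(n_agents):
        # Private signal: correct with probability p_correct.
        signal = true_state if rng.random() < p_correct else 1 - true_state
        # Naive tally: earlier public guesses counted like private signals.
        votes_for_1 = guesses.count(1) + (1 if signal == 1 else 0)
        votes_for_0 = guesses.count(0) + (1 if signal == 0 else 0)
        if votes_for_1 > votes_for_0:
            guess = 1
        elif votes_for_0 > votes_for_1:
            guess = 0
        else:
            guess = signal  # tie: fall back on your own signal
        guesses.append(guess)
    return guesses

if __name__ == "__main__":
    # Once the public history leads by two, no single private signal can
    # flip a decision, so everyone copies the crowd from then on.
    print(run_cascade(seed=0))
    print(run_cascade(seed=3))  # try other seeds; some lock in the wrong state
```

Note that even fully rational agents cascade once the public history outweighs any one private signal; the naive tally above just makes the lock-in (and the wasted later signals) easy to see in a few lines.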
Additionally, if your model of a thing has “gears”, then there are multiple things about the physical world that, if you saw them change, would change your expectations about the thing.
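As a toy illustration (my own hypothetical example, not from any of the posts): a gears-level estimate is a function of observable parts of the world, so learning that any part changed moves the estimate, whereas a deferred point estimate has nothing you can poke at.

```python
# Hypothetical example: estimating the probability a project ships on time.

def gears_estimate(team_size: int, scope_weeks: float, dependencies: int) -> float:
    """A made-up gears-level model: the estimate is a function of
    observable parts of the world, so if any input changes, the
    probability changes with it."""
    capacity = team_size * 1.5                      # assumed weeks of work per person
    slack = capacity - scope_weeks - 0.5 * dependencies
    return max(0.05, min(0.95, 0.5 + 0.1 * slack))

def deferred_estimate() -> float:
    """A gearless estimate: a number adopted from someone smarter.
    No observation about the project plugs into it."""
    return 0.8

print(gears_estimate(team_size=4, scope_weeks=5, dependencies=2))  # moves if inputs move
print(gears_estimate(team_size=3, scope_weeks=5, dependencies=2))  # e.g. one person leaves
print(deferred_estimate())  # stays 0.8 no matter what you observe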
Let’s say you’re talking to someone you think is smarter than you. You start out with different estimates, and different models that produce those estimates. From Ben Pace’s A Sketch of Good Communication:
Here you can see that both blue and red have gears. And since you think their estimate is likely to be much better than yours, and you want to get some of that amazing decision-guiding power, you throw out your model and adopt their estimate (because you don’t understand, or don’t have, all the parts of their model):
Here, you have “destructively deferred” in order to arrive at your interlocutor’s probability estimate. Basically, you’ve been zombified: you no longer have any gears, even if the accuracy of your estimate has potentially increased a little.
An alternative is to try to hold your all-things-considered estimates separate from your independent impressions (the ones your own models produce). But this is often hard and confusing, and the two tend to bleed into each other over time.
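One way to make that bookkeeping concrete (a minimal sketch of my own, not something any of the posts prescribe): store your independent impression and the estimates you defer to as separate fields, and recompute the all-things-considered number from them, instead of overwriting your impression in place. The log-odds pooling rule and the weights below are arbitrary choices for illustration.

```python
from dataclasses import dataclass, field
from math import log, exp

def logit(p: float) -> float:
    return log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + exp(-x))

@dataclass
class Belief:
    """Keep the independent impression and the deference inputs as
    separate fields, so updating on other people never overwrites the
    output of your own model."""
    independent_impression: float                        # what your own gears output
    peer_estimates: list = field(default_factory=list)   # (probability, weight) pairs

    def defer_to(self, probability: float, weight: float = 1.0) -> None:
        self.peer_estimates.append((probability, weight))

    def all_things_considered(self, own_weight: float = 1.0) -> float:
        # Weighted average in log-odds space: one simple pooling rule,
        # with weights that are judgment calls rather than anything principled.
        total = own_weight * logit(self.independent_impression)
        weight_sum = own_weight
        for p, w in self.peer_estimates:
            total += w * logit(p)
            weight_sum += w
        return sigmoid(total / weight_sum)

belief = Belief(independent_impression=0.30)
belief.defer_to(0.80, weight=2.0)                # someone you trust twice as much as yourself
print(belief.independent_impression)             # still 0.30 -- your gears survive
print(round(belief.all_things_considered(), 2))  # pulled toward 0.80
```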