I love this idea. Hoping you’ll catch some deference cycles in the survey for the lols ^^
There’s also this pernicious thing that people fall prey to, where they think they’re forming an independent model of this because they only update on “gears-level evidence”. Unfortunately, when someone tells you “AI is N years away because XYZ technical reasons,” you may think you’re updating on the technical reasons, but your brain was actually just using XYZ as excuses to defer to them.
Adding to the trouble is the fact that arguments XYZ have probably gone through strong filters to reach your attention. Would the person give you the counterarguments if they knew them? How did you happen to land in a conversation with this person? Was it because you sought out “expert advice” from “FOOM AI Timelines Experts Hotline”?
When someone gives you gears-level evidence, and you update on their opinion because of that, that can still constitute deferring. What you think of as gears-level evidence is nearly always disguised testimonial evidence. At least to some, usually damning, degree. And unless you’re unusually socioepistemologically astute, you’re just lost to the process.
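To see how much the filter alone can matter, here is a minimal sketch under made-up numbers: a world where arguments are weak probabilistic signals and an advocate only ever shows you the pro side (the setup and all parameters are illustrative assumptions, not from anyone's actual model):

```python
from math import comb

def p_at_least(k, n, p):
    """P(Binomial(n, p) >= k): chance at least k of n arguments come out 'pro'."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical setup: N candidate arguments exist; each independently
# comes out "pro" with probability 0.7 if the claim is true, 0.3 if false.
# An advocate shows you K pro-arguments and no counterarguments.
N, K, p_true, p_false = 10, 3, 0.7, 0.3

# Naive update: treat the K shown arguments as K independent signals.
naive_lr = (p_true / p_false) ** K

# Filter-aware update: the event you actually observed is only "the
# advocate COULD find K pro-arguments", which happens in almost every world.
aware_lr = p_at_least(K, N, p_true) / p_at_least(K, N, p_false)

for name, lr in [("naive", naive_lr), ("filter-aware", aware_lr)]:
    print(f"{name:>12}: LR {lr:5.2f}, posterior from 50/50 prior {lr / (lr + 1):.2f}")
```

Under these toy numbers the naive updater jumps from 50% to about 93%, while conditioning on the filter only licenses about 62%: most of the apparent gears-level force was the selection, not the arguments.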
“Unfortunately, when someone tells you “AI is N years away because XYZ technical reasons,” you may think you’re updating on the technical reasons, but your brain was actually just using XYZ as excuses to defer to them.”
I really like this point. I’m guilty of having done something like this loads myself.
“When someone gives you gears-level evidence, and you update on their opinion because of that, that still constitutes deferring. What you think of as gears-level evidence is nearly always disguised testimonial evidence. At least to some, usually damning, degree. And unless you’re unusually socioepistemologically astute, you’re just lost to the process.”
If it’s easy, could you try to put this another way? I’m having trouble making sense of what exactly you mean, and it seems like an important point if true.
“When someone gives you gears-level evidence, and you update on their opinion because of that, that still constitutes deferring.”
This was badly written. I just mean that updating on their opinion, as opposed to just taking the patterns and trying to adjust for the fact that you received them through filters, is updating on testimony. I’m saying nothing special here, just that you might be tricking yourself into deferring (instead of impartially evaluating patterns) by letting the gearsy arguments woozle you.
I wrote a bit about how testimonial evidence can be “filtered” in the paradox of expert opinion:
If you want to know whether string theory is true and you’re not able to evaluate the technical arguments yourself, who do you go to for advice? Well, seems obvious. Ask the experts. They’re likely the most informed on the issue. Unfortunately, they’ve also been heavily selected for belief in the hypothesis. It’s unlikely they’d bother becoming string theorists in the first place unless they believed in it.
If you want to know whether God exists, who do you ask? Philosophers of religion agree: 70% accept or lean towards theism, compared to 16% of all PhilPapers Survey respondents.
If you want to know whether to take transformative AI seriously, what now?
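To put a toy number on the selection effect, here is a sketch under fully hypothetical assumptions (the `poll_experts` model is illustrative, not an estimate about string theory or anything else):

```python
import random

def poll_experts(theory_is_true, n=100_000, acc=0.6, seed=0):
    """Toy model of the expert-selection filter (all numbers hypothetical).

    A researcher only enters the field if an initial private signal,
    correct with probability `acc`, says the hypothesis is true. Working
    in the field then yields one more signal of the same strength.
    Returns the fraction of experts whose research signal also says
    'true' -- the firm believers.
    """
    rng = random.Random(seed)

    def signal():
        return rng.random() < (acc if theory_is_true else 1 - acc)

    firm = experts = 0
    for _ in range(n):
        if not signal():
            continue  # entry hunch said "false": never became an expert
        experts += 1
        if signal():  # research signal agrees with the entry hunch
            firm += 1
    return firm / experts

print(poll_experts(theory_is_true=True))   # ~0.60
print(poll_experts(theory_is_true=False))  # ~0.40
```

In this model every expert at least leans towards the hypothesis in both worlds, so “the experts believe it” is close to zero evidence; even the firm-believer fraction (60% vs 40%) carries only one signal’s worth of likelihood ratio. The entry filter guarantees the consensus either way.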
I was short on time today and hurriedly wrote my own comment reply to Sam here before I forgot my point, so it’s not concise; let me know if any of it is unclear:
https://forum.effectivealtruism.org/posts/FtggfJ2oxNSN8Niix/when-reporting-ai-timelines-be-clear-who-you-re-not?commentId=M5GucobHBPKyF53sa
Your comment also better describes a kind of problem I was trying to get at, though I’ll post again an excerpt of my testimony that dovetails with what you’re saying:
I remember, when I was following conversations like this a few years ago in 2018, that there was some threshold for AI capabilities which over a dozen people I talked to said would be imminently achieved. When I asked why they thought that, they said a lot of smart people they trusted were saying it. I talked to a couple of those people, and they said a bunch of smart people they knew were saying it, having heard it from Demis Hassabis of DeepMind. I forget what the threshold was, but Hassabis was right, because it happened around a year later.
What stuck with me is how almost nobody could or would explain their reasoning. Maybe there is way more value to deference as implicit trust in individuals, groups or semi-transparent processes. Yet the reason to defer to Eliezer Yudkowsky, Ajeya Cotra or Hassabis is that they have a process. At the least, more of the alignment community would need to understand those processes instead of having faith in a few people who probably don’t want the rest of the community deferring to them that much. It appears the problem has only gotten worse.
What’s the best post to read to learn about how EAs conceive of “gears-level understanding”/”gears-level evidence”?
I’m not sure. I used to call it “technical” and “testimonial evidence” before I encountered “gears-level” on LW. While evidence is just evidence and Bayesian updating stays the same, it’s useful to distinguish between these two categories because if you have a high-trust community that frequently updates on each other’s opinions, you risk information cascades and double-counting of evidence.
“Information cascades develop consistently in a laboratory situation [for naively rational reasons, in which other incentives to go along with the crowd are minimized]. Some decision sequences result in reverse cascades, where initial misrepresentative signals start a chain of incorrect [but naively rational] decisions that is not broken by more representative signals received later.” (Anderson & Holt, 1998)
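Here is a minimal sketch of that laboratory setup, assuming equal-strength binary signals and agents who naively count every earlier public guess as an independent signal, which is exactly the double-counting failure above (a toy model, not the paper’s exact protocol):

```python
import random

def simulate_cascade(n_agents=20, p=0.6, seed=7):
    """Toy information cascade in the style of the urn experiments.

    The true state is 1. Each agent privately sees a signal matching the
    true state with probability p, plus all earlier public guesses, and
    treats each earlier guess as if it were an independent signal. Once
    the public guesses lean two or more to one side, no private signal
    can flip the decision, and everyone herds.
    """
    rng = random.Random(seed)
    true_state, guesses = 1, []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else 1 - true_state
        public = sum(1 if g == 1 else -1 for g in guesses)  # net public lean
        total = public + (1 if signal == 1 else -1)
        # Follow the overall count; on a tie, follow your own signal.
        guesses.append(1 if total > 0 else 0 if total < 0 else signal)
    return guesses

# Try a few seeds: most runs lock in on the true state quickly, but
# occasionally two early wrong signals start a "reverse cascade" of 0s.
for s in range(5):
    print(simulate_cascade(seed=s))
```

Once the first two public guesses agree, every later guess is pure echo: the chain carries no more evidence than those first couple of signals, however long it grows.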
Additionally, if your model of a thing has “gears”, then there are multiple things about the physical world that, if you saw them change, would change your expectations about the thing.
Let’s say you’re talking to someone you think is smarter than you. You start out with different estimates and different models that produce those estimates. From Ben Pace’s A Sketch of Good Communication:
Here you can see both blue and red have gears. And since you think their estimate is likely to be much better than yours, and you want to get some of that amazing decision-guiding power, you throw out your model and adopt their estimate (cuz you don’t understand or don’t have all the parts of their model):
Here, you have “destructively deferred” in order to arrive at your interlocutor’s probability estimate. Basically zombified. You no longer have any gears, even if the accuracy of your estimate has potentially increased a little.
An alternative is to try to hold your all-things-considered estimates separate from your independent impressions (that you get from your models). But this is often hard and confusing, and they bleed into each other over time.
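A minimal sketch of the bookkeeping this implies, assuming you pool in log-odds space with an arbitrary peer weight (the pooling rule, the weight, and all the numbers are assumptions for illustration, not a recommendation):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def all_things_considered(independent_impression, peer_estimates, peer_weight=1.0):
    """Weighted log-odds average of my own impression and peers' estimates.

    The discipline lives in the caller: `independent_impression` is
    stored and updated on its own, and only pooled at report time, so
    deferring never overwrites the model that produced it.
    """
    num = logit(independent_impression) + peer_weight * sum(map(logit, peer_estimates))
    den = 1.0 + peer_weight * len(peer_estimates)
    return sigmoid(num / den)

my_impression = 0.30   # what my own gears-level model says
peers = [0.70, 0.80]   # estimates I partly defer to
print(all_things_considered(my_impression, peers))  # ~0.61: report this,
# but keep reasoning from my_impression, so the gears survive.
```

Keeping the two numbers in separate slots is what stops the all-things-considered estimate from quietly replacing the independent impression over time.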