Unfortunately, when someone tells you “AI is N years away because XYZ technical reasons,” you may think you’re updating on the technical reasons, but your brain was actually just using XYZ as excuses to defer to them.
I really like this point. I’m guilty of having done something like this loads myself.
When someone gives you gears-level evidence, and you update on their opinion because of that, that still constitutes deferring. What you think of as gears-level evidence is nearly always disguised testimonial evidence, at least to some (usually damning) degree. And unless you're unusually socioepistemologically astute, you're just lost to the process.
If it’s easy, could you try to put this another way? I’m having trouble making sense of what exactly you mean, and it seems like an important point if true.
“When someone gives you gears-level evidence, and you update on their opinion because of that, that still constitutes deferring.”
This was badly written. I just mean that updating on their opinion, as opposed to taking the patterns themselves and trying to adjust for the fact that you received them through filters, is updating on testimony. I'm saying nothing special here, just that you might be tricking yourself into deferring (instead of impartially evaluating the patterns) by letting the gearsy arguments woozle you.
If you want to know whether string theory is true and you’re not able to evaluate the technical arguments yourself, who do you go to for advice? Well, seems obvious. Ask the experts. They’re likely the most informed on the issue. Unfortunately, they’ve also been heavily selected for belief in the hypothesis. It’s unlikely they’d bother becoming string theorists in the first place unless they believed in it.
If you want to know whether God exists, who do you ask? Philosophers of religion agree: 70% accept or lean towards theism, compared to 16% of all PhilPapers Survey respondents.
If you want to know whether to take transformative AI seriously, what now?
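To make the filtering concrete, here's a toy Bayesian sketch. All the numbers are invented for illustration; the point is only the structure: if people disproportionately enter a field because they already believe its central hypothesis, then "most experts believe H" carries far less evidence than it naively appears to.

```python
# Toy model of filtered expert testimony (illustrative numbers only).
# H = the field's central hypothesis; "believes" = an expert endorses H.

def posterior(prior_h, p_believes_given_h, p_believes_given_not_h):
    """P(H | an expert believes H), via Bayes' rule."""
    joint_h = prior_h * p_believes_given_h
    joint_not_h = (1 - prior_h) * p_believes_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Naive reading: experts track truth, so their belief is strong evidence.
print(posterior(0.5, 0.9, 0.1))   # ~0.90

# With selection: even if H is false, the believers are the ones who join
# the field, so P(believes | not H) is high among experts too -- and the
# same observation now barely moves you off your prior.
print(posterior(0.5, 0.9, 0.7))   # ~0.56
```

The particular numbers don't matter; what matters is that the likelihood ratio you should assign to expert consensus depends on how the experts got there.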
I wrote a bit about how testimonial evidence can be “filtered” in the paradox of expert opinion: