I partly agree with Nathan’s post, for a few reasons:
- If Alice believes X because she trusts that Bob looked into it, then it’s useful for Alice to note her reason. Otherwise, you can get bad situations like ‘Bob did not in fact look into X, but he observes Alice’s confidence and concludes that she must have looked into it, so he takes X for granted too and Alice never realizes why’. This isn’t a big problem in two-person groups, but can lead to a lot of double-counted evidence in thousand-person groups.
- It’s important to distinguish ‘this feels compelling’ from ‘this is Bayesian evidence about the physical world’. If an argument seems convincing, but would seem equally convincing if it were false, then you shouldn’t actually treat the convincingness as evidence. (A rough odds-form sketch of this point follows the list.)
- Getting the right answer here is important enough, and blind spots and black-swan errors are common enough, that it can make a lot of sense to check your work even in cases where you’d be super surprised to learn you’d been wrong. Getting outside feedback can be a good way to do this.
- I’ve noticed that when I worry “what if everything I believe is wrong?”, sometimes it’s a real worry that I’m biased in a specific way, or that I might just be missing something. Other times, it’s more like an urge to be dutifully/performatively skeptical or to get a certain kind of emotional reassurance; see https://equilibriabook.com/toc/ for a good discussion of this.
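To make the ‘equally convincing either way’ point a bit more concrete, here’s a rough odds-form sketch, writing E for ‘the argument feels convincing to me’ and X for the claim it argues for:

$$
\frac{P(X \mid E)}{P(\neg X \mid E)} = \frac{P(E \mid X)}{P(E \mid \neg X)} \cdot \frac{P(X)}{P(\neg X)}
$$

If the argument would feel about as convincing whether X is true or false, then P(E | X) ≈ P(E | ¬X), the likelihood ratio is close to 1, and your odds on X shouldn’t move much just because the argument felt persuasive.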
Re:

“Arguably this forum kind of does this job, though A) we are all tremendously biased B) are people *really* checking the minutiae? I am not.”
Some people check some minutiae. The end of https://sideways-view.com/2018/07/08/the-elephant-in-the-brain/ is a cool example that comes to mind.
I haven’t had any recent massive updates about EA sources’ credibility after seeing a randomized spot check, which is one way of trying to guess at the expected utility of more marginal spot-checking vs. putting the same resources into something else.
My main suggestion, though, would be to check out various examples of arguments between EAs, criticisms of EAs by other EAs, etc., and use that to start building a mental model of EA’s epistemic hygiene and likely biases or strengths. “Everyone on the EA Forum must be tremendously biased because otherwise they surely wouldn’t visit the forum” is a weak starting point by comparison; you can’t figure out which groups in the real world are biased (or how much, or in what ways) from your armchair.
I think I know very well where Nathan is coming from, and I don’t think it’s invalid, for the reasons you state, among others. But after much wrangling with the same issues, my comment is the only summary statement I’ve ever really been able to make on the matter. He’s just left religion and I feel him on not knowing what to trust; I don’t think there’s any other place he could be right now.
I suppose what I really wanted to say is that you can never surrender those doubts to anyone else or some external system. You just have to accept that you will make mistakes, stay alert to new information, and stay in touch with what changes in you over time.
Yeah, strong upvote to this too.