A technical note: Bayesianism is not logic, statistics is not rationality
Perhaps I am beating a dead horse for this community, but this is a very lucid explanation of what probabilistic/statistical reasoning cannot do. Namely: first-order logic. There's really no way of encoding relations or quantifiers into purely Bayesian inference, which makes it quite weak as a tool for model building.
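To make the gap concrete (my own toy sketch, not taken from the linked essay):

```latex
% A Bayesian agent assigns probabilities over a fixed Boolean algebra of
% propositions, e.g.
%     \Pr(A), \quad \Pr(A \wedge B), \quad \Pr(A \mid B).
% A first-order sentence such as
%     \forall x.\, \mathrm{Mortal}(x)
% quantifies over a domain that may be infinite, so it is not logically
% equivalent to any finite Boolean combination of its instances:
%     \forall x.\, \mathrm{Mortal}(x)
%       \not\equiv \mathrm{Mortal}(a_1) \wedge \dots \wedge \mathrm{Mortal}(a_n)
%       \quad \text{for any finite } n.
% The probability axioms force only the one-way bound
%     \Pr(\forall x.\, \mathrm{Mortal}(x))
%       \le \Pr(\mathrm{Mortal}(a_1) \wedge \dots \wedge \mathrm{Mortal}(a_n)),
% so no amount of instance probabilities pins down the probability of the
% universal. Conditioning updates \Pr over a fixed proposition space; it
% cannot introduce new quantifier structure.
```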
Further, integrating probability and logic is a huge unsolved problem! We actually have very little idea how to combine our two greatest successes in formalizing rationality.
I found this tremendously clarifying, though not immediately useful. But it has definitely broadened my thinking.
I think MIRI reported making a big breakthrough on this.
And here it is: https://intelligence.org/2016/09/12/new-paper-logical-induction/
I don’t think this sort of post is particularly relevant to the EA forum. It’s about probability and logic, not altruism.
It feels to me like inclusion should be based on plausible impact, whether direct or indirect, rather than on immediate apparent relevance to effective altruism. If this essay improves our thinking and makes the effective altruist movement better at a rate comparable to the other stuff posted here, then it's a valuable post.
* I might be a little biased because I think EA should be prioritizing epistemic rationality much more highly.
I agree with this pretty strongly. But I also think authors have to make an effort to bridge the gap with intermediate steps in their reasoning, rather than pouring unexplained insights (however brilliant they may be) onto a bewildered reader.
Examining the foundations of the practical reasoning used (and seemingly taken for granted) by many EAs seems highly relevant. Wish we saw more of this kind of thing.
It’s a little indirect, but it’s a link to a nice essay on a topic which is relevant when we get to the “working stuff out” side of effective altruism.