Interesting suggestion. I don’t think anyone’s advocating for using reasoning without evidence (i.e. ‘a priori reasoning’), nor does anyone think that we should only reimplement interventions exactly as performed in particular studies, without extrapolating at all. Groups like the Future of Humanity Institute, in particular, are seeking to generalise from evidence in a principled way. So the question is really ‘of the ways to use reason to generalise from existing evidence, which is best?’ It seems counterproductive to divide people who are all fundamentally trying to answer this same question into different camps.
Bayesian rationality is one compelling answer to the question ‘how do we apply reason to evidence?’, and it has some advantages (illustrated concretely in the sketch after this list):
- it allows quantification of beliefs,
- it allows quantification of strength of evidence,
- it’s unexploitable in betting.
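For concreteness, here is a minimal sketch of what the first two properties look like in practice: a toy Python example of updating a belief about a possibly biased coin. Everything in it (the hypotheses, the probabilities) is invented for illustration; the third property is the standard Dutch book argument and doesn’t reduce to a few lines of code.

```python
# A toy Bayesian update: is this coin biased towards heads (H1) or fair (H0)?
# All numbers are invented for illustration.

def update(credence_h1: float, likelihood_h1: float, likelihood_h0: float) -> float:
    """One application of Bayes' rule for two hypotheses: P(H1 | new evidence)."""
    joint_h1 = credence_h1 * likelihood_h1
    joint_h0 = (1.0 - credence_h1) * likelihood_h0
    return joint_h1 / (joint_h1 + joint_h0)

credence = 0.5              # quantified belief: 50% credence that the coin is biased
p_heads_if_biased = 0.8     # the hypothesised bias
p_heads_if_fair = 0.5

# Observe three heads in a row, updating the credence after each flip.
for flip in range(1, 4):
    credence = update(credence, p_heads_if_biased, p_heads_if_fair)
    print(f"after flip {flip}: P(biased) = {credence:.3f}")

# Quantified strength of evidence: the Bayes factor contributed by each heads.
print(f"Bayes factor per heads: {p_heads_if_biased / p_heads_if_fair:.2f}")
```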
Bayesian stats is not the panacea of logic it is often held out to be; I say this as someone who practices statistics for the purpose of social betterment (see e.g. https://projects.propublica.org/surgeons/ for an example of what I get up to).
First, my experience is that quantification is really, really hard. Here are a few reasons why.
I have seen few discussions within EA of the logistics of data collection in developing countries, which is a HUGE problem. For example, how do you get people to talk to you? How do you know if they’re telling you the truth? These folks have often talked to wave after wave of well-meaning foreigners over their lives, and many would rather ignore you, or lie to you and your careful survey. The people I know who actually collect data in the field have all sorts of nasty things to say about the realities of working in fluid environments.
Even worse: for a great many outcomes there just ISN’T a way to get good indicator data. Consider the problem of attributing outcomes to interventions. We can’t even reliably solve attribution in the digital advertising industry, where every action is online and therefore recorded somewhere. How, then, do we solve it at the level of social interventions? The answers revolve around things like theories of change and qualitative indicators, neither of which the EA community takes seriously. Yet often this is the ONLY type of evidence we can get.
Second, Bayesian stats is built entirely on a single equation (written out below) that follows from the axioms of probability. All of the update, learning, rationality stuff is an interpretation we put on top of it. Andrew Gelman and Cosma Shalizi give the clearest exposition of this, in “Philosophy and the Practice of Bayesian Statistics”:
“A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.”
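For readers who haven’t seen it spelled out, the single equation in question is just Bayes’ theorem (textbook material, and the same rule the coin sketch above applies):

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

The “update” interpretation comes from reading P(H) as belief before seeing the evidence and P(H | E) as belief after; nothing in the equation itself forces that reading.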
Bayesianism is not rationality. It’s a particular mathematical model of rationality. I like to analogize it to propositional logic: it captures some important features of successful thinking, but it’s clearly far short of the whole story.
We need much more sophisticated frameworks for analytical thinking. My favorite general-purpose approach, which handles mixed quantitative/qualitative evidence and was developed at the CIA out of the study of cognitive biases, is here:
https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art11.html
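(The chapter at that link describes Richards Heuer’s Analysis of Competing Hypotheses. As a rough illustration of its central move, ranking hypotheses by how little evidence contradicts them rather than by how much supports them, here is a minimal sketch; the hypotheses, evidence items, and scores are all invented for the example.)

```python
# A toy Analysis-of-Competing-Hypotheses matrix. Rows are evidence items,
# columns are hypotheses; entries mark the evidence as consistent (+1),
# inconsistent (-1), or neutral (0) with each hypothesis. All values invented.

EVIDENCE_MATRIX = {
    "school attendance rose":        {"program worked": +1, "economy improved": +1, "measurement error": 0},
    "rose equally in control towns": {"program worked": -1, "economy improved": +1, "measurement error": 0},
    "enumerators changed mid-study": {"program worked": 0,  "economy improved": 0,  "measurement error": +1},
}

def rank_hypotheses(matrix: dict) -> list:
    """ACH's key heuristic: prefer the hypothesis with the LEAST
    inconsistent evidence, not the one with the most support."""
    hypotheses = next(iter(matrix.values())).keys()
    scores = {
        h: sum(1 for row in matrix.values() if row[h] < 0)
        for h in hypotheses
    }
    return sorted(scores.items(), key=lambda kv: kv[1])  # fewest inconsistencies first

for hypothesis, inconsistencies in rank_hypotheses(EVIDENCE_MATRIX):
    print(f"{hypothesis}: {inconsistencies} piece(s) of inconsistent evidence")
```

The point of the matrix form is that qualitative evidence sits alongside quantitative evidence on equal terms, and a hypothesis collecting many “consistent” marks can still lose to one with fewer inconsistencies.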
But of course this isn’t rationality either. It’s never been codified completely, and probably cannot be.