Thanks for all the care and effort which went into writing this!
At the same time, while reading, my reactions most of the time were “this seems a bit confused”, “this likely won’t help”, or “this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well and has a different opinion”.
Unfortunately, illustrating this in detail for the whole post would be a project of multiple weeks.
At the same time, I thought it could be useful to discuss at least one small part in detail, to illustrate what the actual in-the-detail disagreement could look like.
I’ve decided to write a detailed response to a few paragraphs about rationality and Bayesianism. From my perspective, this is not a cherry-picked part of the original text which is particularly wrong, but a part which seems representatively wrong/confused. I picked it for convenience, because I can argue and reference this part particularly easily.
> Individual Bayesian Thinking (IBT) is a technique inherited by EA from the Rationalist subculture, where one attempts to use Bayes’ theorem on an everyday basis. You assign each of your beliefs a numerical probability of being true and attempt to mentally apply Bayes’ theorem, increasing or decreasing the probability in question in response to new evidence. This is sometimes called “Bayesian epistemology” in EA, but to avoid confusing it with the broader approach to formal epistemology with the same name we will stick with IBT.
This seems like a pretty strange characterization. Even though I have participated in multiple CFAR events, teach various ‘rationality techniques’, and know a decent amount about Bayesian inference, I find it misleading/confused.
What’s called “Bayesian epistemology” in rationalist circles is basically the common understanding of the term:
- you don’t hold beliefs to be true or false, but have credences in them
- normatively, you should update the credences based on evidence; the proper rule for that is Bayes’ rule, which is intractable in practice, so you do various types of approximations
- you should strive for coherent beliefs if you don’t want to be Dutch-booked
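To make the normative part concrete, here is a minimal sketch of a single Bayes-rule update of a credence (the numbers are made up purely for illustration):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One Bayes-rule update of the credence in hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    return p_e_given_h * prior / p_e                           # posterior P(H | E)

# Made-up illustration: prior credence 0.3, and evidence which is 4x
# as likely if H is true as if it is false.
posterior = bayes_update(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"{posterior:.3f}")  # 0.632 -- the credence goes up, as it normatively should
```

The intractability mentioned above kicks in once hypotheses and evidence stop being single binary propositions; hence the approximations.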
It’s important to understand that in this frame, it is a normative theory. Bayes’ theorem in this perspective is not some sort of “minor aid for doing some sort of likelihood calculation” but a formal foundation for a large part of epistemology.
The view that you believe different things to different degrees, and that these credences basically are Bayesian probabilities and are normatively governed by the same theory, isn’t an ‘EA’ or ‘rationalist’ thing but a standard Bayesian take (cf. Probability Theory: The Logic of Science, E. T. Jaynes).
Part of what Eliezer’s approach to ‘applied rationality’ aimed for was taking Bayesian epistemology seriously and applying this frame to improve everyday reasoning.
But this is almost never done by converting your implicit probability distributions to numerical credences, doing the explicit numerical math, and blindly trusting the result!
What’s done instead is:
- noticing that your brain already uses credences and probabilities internally all the time; you can easily access your “internal” (S1) probabilities in an intuitive way by asking yourself questions like “how surprised would you be to see [a pink car today | Donald Trump’s reelection | a SpaceX rocket landing in your backyard]” (the idea that brains do this is pretty mainstream, e.g. Confidence as Bayesian Probability: From Neural Origins to Behavior, Meyniel et al.)
- noticing that your brain often clearly does something similar to what Bayesians suggest as the normative ideal; e.g. if two SpaceX rockets had already landed in your backyard today, you would be way less surprised by a third one
- noticing that there is often a disconnect between this intuitive/internal/informal calculation and the explicit, verbal reasoning (cf. alief/belief)
- …and using all of that to improve both the “implicit” and the “explicit” reasoning!
The actual ‘techniques’ derived from this are often implicit. For example, one actual technique is: imagine you are an alien who has landed in one of two worlds. They differ in that in one of them a proposition is true, and in the other its opposite is true. You ask yourself how each world would look, and then look at the actual world.
For example, consider the proposition “a democratic vote is the optimal way to make decisions in organizations”. How would the world where this is true look? There are parts of the world with intense competition between organizations, e.g. companies in highly competitive industries, optimizing for hard and measurable things. In the world where the proposition is true, I’d expect a lot of voting in these companies. We don’t see that, which decreases my credence in the proposition.
It is relatively easy to see how this is both connected to Bayes and not asking people to do any explicit odds multiplications.
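For illustration only (my own sketch, with invented numbers; the technique itself doesn’t ask you to compute any of this explicitly): the “two worlds” comparison is implicitly an odds-form Bayes update, multiplying prior odds by a likelihood ratio:

```python
def two_worlds_update(prior_odds: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio of the observation."""
    return prior_odds * (p_obs_if_true / p_obs_if_false)

# Invented numbers: prior odds 1:1 on "voting is optimal"; observing *no* voting
# in hyper-competitive companies is much likelier in the world where it is false.
posterior_odds = two_worlds_update(prior_odds=1.0, p_obs_if_true=0.1, p_obs_if_false=0.7)
print(f"posterior odds ~ {posterior_odds:.2f}:1")  # ~0.14:1 -- credence goes down
```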
> There is nothing wrong with quantitative thinking, and much of the power of EA grows from its dedication to the numerical. However, this is often taken to the extreme, where people try to think almost exclusively along numerical lines, causing them to neglect important qualitative factors or else attempt to replace them with doubtful or even meaningless numbers because “something is better than nothing”. These numbers are often subjective “best guesses” with little empirical basis.[27]
While some people do make the error of replacing complex implicit calculations with over-simplified spreadsheets of explicit numbers, this paragraph seems to conflate multiple things under “numerical” or “quantitative”.
Assuming fairly standard cognitive science and neuroscience, at some level all thinking is “numerical”, including thinking which feels intuitive or qualitative. People usually express such thinking in words like “I strongly feel” or “I’m pretty confident”.
A classical rationalist move in such cases is to try to make the implicit explicit. E.g., if you are fairly confident, at what odds would you be willing to bet on it?
When done correctly, the ability and willingness to do that mostly exposes what’s already there. People already act based on implicit credences and likelihoods, even when they don’t try to express them as probability distributions and you don’t have access to them.
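As a minimal sketch of the probability-to-odds translation the betting question relies on (my own illustration, not a prescribed technique):

```python
def fair_odds(credence: float) -> str:
    """Translate a credence into the fair betting odds it implies."""
    return f"{credence / (1 - credence):.2f} : 1"

# If "pretty confident" means ~0.9, you should be indifferent to betting at 9:1;
# refusing such a bet suggests your real credence is lower than you claimed.
print(fair_odds(0.9))  # 9.00 : 1
print(fair_odds(0.6))  # 1.50 : 1
```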
E.g., when some famous experts recommended a ‘herd immunity’ strategy for dealing with covid, using strong and confident words, such recommendations were actually subjective “best guesses” with little empirical basis. The same is true for many expert opinions on policy topics!
The rationalist habit of reporting credences and predictions using numbers basically exposes many things to the possibility of being proved wrong, and exposes many personal best guesses for what they are.
Yes, for someone who isn’t used to this at all, it may create a fake aura of ‘certainty’, because in common communication the use of numbers often signals ‘this is more clear’ and the use of words signals ‘this is more slippery’. But this is just a communication protocol.
Yes, as I wrote before, some people may make the mistake of converting some basic things to numbers and, as the next step, replacing their brains with spreadsheets of Bayes formulas, but this does not seem common, at least in my social neighborhood.
> For instance, Bayesian estimates are heavily influenced by one’s initial figure (one’s “prior”), which, especially when dealing with complex, poorly-defined, and highly uncertain and speculative phenomena, can become subjective (based on unspecified values, worldviews, and assumptions) to the point of arbitrary.[28] This is particularly true in existential risk studies where one may not have good evidence to update on.
I would be curious how the authors imagine non-Bayesian thinking which does not internally depend on any priors would work.
> We assume that, with enough updating in response to evidence, our estimates will eventually converge on an accurate figure. However, this is dependent on several conditions, notably well-formulated questions, representative sampling of (accurate) evidence, and a rigorous and consistent method of translating real-world observations into conditional likelihoods.[29] This process is very difficult even when performed as part of careful and rigorous scientific study; attempting to do it all in your head, using rough-guess or even purely intuitional priors and likelihoods, is likely to lead to more confidence than accuracy.
This seems confused (the “common response” mentioned below applies here exactly). How do you imagine, for example, that a group of people looking at a tree manages to agree on seeing a tree? The process of converting raw sensory data into the tree-hypothesis is way more complicated than a typical careful and rigorous scientific study, and also way more reliable than a typical published scientific study.
Again: correctly understood, the applied rationalist idea is not to replace our mind’s natural way of recognizing a tree with a process where you assign numbers to statements like “green in upper left part of visual field” and do an explicit Bayesian calculation in an S2 way, but just to be… less wrong.
> This is further complicated by the fact that probabilities are typically distributions rather than point values – often very messy distributions that we don’t have nice neat formulae for. Thus, “updating” properly would involve manipulating big and/or ugly matrices in your head. Perhaps this is possible for some people.
> A common response to these arguments is that Bayesianism is “how the mind really works”, and that the brain already assigns probabilities to hypotheses and updates them similarly or identically to Bayes’ rule. There are good reasons to believe that this may be true. However, the fact that we may intuitively and subconsciously work along Bayesian lines does not mean that our attempts to consciously “do the maths” will work.
I think the “common response” is partially misunderstood here. The common response does not imply you can consciously and explicitly multiply the large matrices or do exact Bayesian inference, any more than someone catching a ball is consciously and explicitly solving the equations of motion.
The correct ideas here are:
- you can often make some parts or results of the implicit subconscious calculations explicit and numeric (cf. forecasting, betting, …)
- the implicit reasoning is often biased and influenced by wishes and wants
- explicitly stating things or betting on things sometimes exposes these problems; explicit reasoning can be good for that
- explicit reasoning is also good for understanding what the normatively good move is in simple or idealized cases (see the sketch after this list)
- on the other hand, explicit reasoning alone is computationally underpowered for almost anything beyond very simple models (compare how many FLOPs your brain uses vs. how fast you can explicitly multiply numbers)
- what you usually need to do is use both, and watch for flaws
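As an illustration of the “simple or idealized cases” bullet (a standard textbook beta-binomial example, not something from the post): in a conjugate toy model, updating a full distribution rather than a point credence is tractable in closed form:

```python
# Beta-binomial: one of the few cases where updating a full *distribution*
# (not just a point credence) is easy -- the conjugate prior absorbs the data.
from scipy.stats import beta

successes, failures = 7, 3   # observed evidence
a0, b0 = 1, 1                # uniform Beta(1, 1) prior
posterior = beta(a0 + successes, b0 + failures)

print(f"posterior mean: {posterior.mean():.3f}")            # ~0.667
print(f"90% credible interval: {posterior.interval(0.9)}")  # roughly (0.44, 0.86)
```

Outside such toy cases, the explicit math quickly becomes intractable, which is exactly the point of the last two bullets: let the brain’s implicit machinery do the heavy lifting, and use explicit calculation for simple models and spot checks.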
> In addition, there seems to have been little empirical study of whether Individual Bayesian Updating actually outperforms other modes of thought, never mind how this varies by domain. It seems risky to put so much confidence in a relatively unproven technique.
Personally, I don’t know anyone who proposes that people should use the “Individual Bayesian Thinking” mode of thought in the way you describe, and I don’t see much reason to run a study on this. Also, while a lot of people in EA orgs subscribe to basically Bayesian epistemology, I don’t know anyone who tries to live by “IBT”, so you should probably be less worried about the risks of its use.
> The process of Individual Bayesian Updating can thus be critiqued on scientific grounds,
So, to me, this is characteristic of the whole text, and, frankly, annoying. I don’t think you have properly engaged with Bayesian epistemology, the state of the art in applied rationality practice, or the relevant cognitive science. “Critiqued on scientific grounds” sounds serious and authoritative… but where is the science?
> but there is also another issue with it and hyper-quantitative thinking more generally: motivated reasoning. With no hard qualitative boundaries and little constraining empirical data, the combination of expected value calculations and Individual Bayesian Thinking in EA allows one to justify and/or rationalise essentially anything by generating suitable numbers.
This is both sad and funny. One of the good things about rationalist habits and techniques is that stating explicit numbers often allows one to spot and correct motivated reasoning. In relation to existential risk and similar domains, the hope is often that by practicing this in domains with good feedback and bets which can be empirically evaluated, you get better at thinking clearly… and this will at least partially generalize to epistemically more challenging domains.
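A minimal sketch of the feedback loop meant here (my own illustration): scoring numeric predictions against outcomes makes motivated reasoning show up as a bad score instead of hiding in slippery words. The Brier score is a standard choice:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and actual outcomes.
    Lower is better; constant 50/50 hedging earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up track record: well-calibrated confidence scores well...
print(brier_score([0.9, 0.8, 0.2, 0.7], [1, 1, 0, 1]))    # 0.045
# ...while motivated overconfidence is exposed by the numbers.
print(brier_score([0.95, 0.9, 0.9, 0.85], [1, 0, 0, 1]))  # ~0.411
```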
Yes, you can overdo it, or do stupid or straw versions of this. Yes, it is not perfect.
But what’s the alternative? Honestly, in my view, in many areas of expertise the alternative is to state views, claims, and predictions in a sufficiently slippery and non-quantitative way that it is very difficult to clearly disprove them.
Take, for example, your text’s claims about diversity. Given the way you use the research, it seems anyone trying to refute the advice on empirical grounds would have a really hard time, and you would always be able to write some story about why some dimension of diversity is not important, or why some other piece of research states something else. (It seems a common occurrence in the humanities that some confused ideas basically never die unless they lose support on the level of the ‘sociology of science’.)
Bottom line:
- these 8 paragraphs did not convince me of any mistake people at e.g. FHI may be making
- the suggestion that “Bayes’ theorem should be applied where it works” is pretty funny; I guess Bayesians wholeheartedly agree with it!
- suggestions like “studies of circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought” seem irrelevant given the lack of understanding of actual ‘rationality techniques’
- we have real-world evidence that getting better at some of the traditional “rationalist” skills makes you better at some measurable things, e.g. forecasting
I suspect that even what I see as a wrong model of the ‘errors of EA’ may point to some interesting evidence. For example, maybe some EA community builders actually are teaching “individual Bayesian thinking” as a technique you should use in the way described?