Hey Ben, thanks a lot for posting this! And props for having the energy to respond to all these comments :)
I’ll try to reframe points that others have made in the comments (and which I tried to make earlier, but less well): I suspect that part of why these conversations sometimes feel like we’re talking past one another is that we’re focusing on different things.
You and Vaden seem focused on creating knowledge. You (I’d say) correctly note that, as frameworks for creating knowledge, EV maximization and Bayesian epistemology aren’t just useless—they’re actively harmful, because they distract us from the empirical studies, data analysis, feedback loops, and argumentative criticism that actually create knowledge.
Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology aren’t supposed to be frameworks for creating knowledge—they’re frameworks for turning knowledge into decisions, and your arguments don’t seem to be enough for refuting them as such.
To back up a bit, I think probabilities aren’t fundamental to decision making. But bets are. Every decision we make is effectively taking or refusing to take a bet (e.g. going outside is betting that I won’t be hit in the head by a meteor). So it’s pretty useful to have a good answer to the question: “What bets should I take?”
In this context, your post isn’t convincing me because I don’t see a good alternative to the EV approach to making bets, and because maybe there can’t be a good alternative.
1.
One of the questions your post leaves me with is: What kinds of bets do you think I should take, when I’m uncertain about what will happen? I.e., how do you think I should make decisions under uncertainty?
Maximizing EV under a Bayesian framework offers one answer, as you know, roughly: we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.
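To make that rule concrete, here’s a minimal sketch (the function and numbers are my own, purely for illustration; they’re not meant to capture anyone’s actual credences):

```python
# A toy version of the EV rule above: take a bet iff its expected value
# is positive. The probability, payoff, and loss are made-up numbers.

def should_take_bet(p_win: float, payoff: float, loss: float) -> bool:
    """Return True iff the bet's expected value is positive."""
    expected_value = p_win * payoff - (1 - p_win) * loss
    return expected_value > 0

# E.g. a bet that pays 10 with probability 0.3 and costs 3 otherwise:
# EV = 0.3 * 10 - 0.7 * 3 = 0.9 > 0, so the rule says to take it.
print(should_take_bet(0.3, 10, 3))  # True
```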
I think you’re right to point out that this approach has significant weaknesses: it has counterintuitive results when used with some very low probabilities, it’s very sensitive to arbitrary judgements and bias, and our best guesses about whether far-future events will happen might be totally uncorrelated with whether they actually happen. (I’m not as compelled by some of your other criticisms, largely for reasons others’ comments discuss.)
Despite these downsides, it seems like a bad idea to drop the EV approach to “what kinds of bets should I take?” without a better answer. (Vaden offers a promising approach to making decisions, but it just passes the buck on this—we’ll still need an answer to my question when we get to his step 2.) As your familiarity with catastrophic dictatorships suggests, dumping a flawed status quo is a mistake if we don’t have a better alternative.
2.
Another worry is that probabilities are so useful that we won’t find a better alternative.
I think of probabilities as a language for answering the earlier basic question of “What bets should I make?” For example, “There’s a 25% chance (i.e. 1:3 odds) that X will happen” is (as I see it) shorthand for “My potential payoff had better be at least 3 times bigger than my potential loss for betting on X to be worth it.” So probabilities express thresholds in your answers to the question “What bets on event X should I take?” That is, from a pragmatic angle, subjective probabilities aren’t supposed to be deep truths about the world; they’re expressions of our best guesses about how willing we should be to bet on various events. (Other considerations also make probabilities particularly well-fitting tools for describing our preferences about bets.)
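In code, that shorthand is just a change of units. Here’s a small sketch of my own (function names and numbers are mine, for illustration only):

```python
# The "probability as betting threshold" translation from above:
# a probability p on X corresponds to a break-even payoff:loss ratio
# of (1 - p) / p for bets on X.

def min_payoff_to_loss_ratio(p: float) -> float:
    """Break-even payoff:loss ratio implied by probability p."""
    return (1 - p) / p

print(min_payoff_to_loss_ratio(0.25))  # 3.0: "25% chance" = "payoff had
                                       # better be at least 3x the loss"

# Equivalently, take the bet iff p * payoff > (1 - p) * loss:
def worth_taking(p: float, payoff: float, loss: float) -> bool:
    return payoff / loss > min_payoff_to_loss_ratio(p)

print(worth_taking(0.25, 4, 1))  # True: 4:1 clears the 3:1 threshold
print(worth_taking(0.25, 2, 1))  # False: 2:1 falls short
```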
So rejecting the use of probabilities (as I understand them) under severe uncertainty seems to lead to an unacceptable, maybe even absurd, conclusion: the rejection of consistent thresholds for deciding whether to bet on uncertain events. This is a mistake—if we accept or reject bets on some event without a consistent threshold for which reward:loss ratios are worth taking, then we’ll end up doing silly things like refusing a bet on an event and then accepting a bet on the same event at a less favorable reward:loss ratio.
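Here’s a toy check of that last claim (my own construction, with made-up numbers; assume the same stake on both bets). Refusing a 3:1 bet on X while accepting a 2.5:1 bet on the same X is dominated: swapping the two choices does at least as well however X turns out.

```python
# Compare two decision patterns on the same event X, same stake each time:
#   inconsistent: refuse a 3:1 payoff:loss bet, accept a 2.5:1 bet
#   swapped:      accept the 3:1 bet, refuse the 2.5:1 bet

STAKE = 1.0

def total_payout(accept_3to1: bool, accept_2_5to1: bool, x_happens: bool) -> float:
    total = 0.0
    if accept_3to1:
        total += 3.0 * STAKE if x_happens else -STAKE
    if accept_2_5to1:
        total += 2.5 * STAKE if x_happens else -STAKE
    return total

for x_happens in (True, False):
    inconsistent = total_payout(False, True, x_happens)
    swapped = total_payout(True, False, x_happens)
    print(x_happens, inconsistent, swapped)

# X happens:        inconsistent gets 2.5, swapped gets 3.0 (swapped wins).
# X doesn't happen: both lose 1.0 (tie). Whatever your beliefs about X,
# the inconsistent pattern is never better and sometimes worse.
```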
You might be thinking something like “ok, so you can always describe an action as endorsing some betting threshold, but that doesn’t mean it’s useful to think about this explicitly.” I’d disagree, because not recognizing our betting threshold makes it harder to notice and avoid mistakes like the one above. It also takes away clarity and precision of thought that’s helpful for criticizing our choices, e.g. it makes an extremely high betting threshold about the value of x-risk reduction look like agnosticism.
Thanks again for your thoughtful post!
Hey Mauricio! Two brief comments -
Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology were never supposed to be frameworks for creating knowledge—they’re frameworks for turning knowledge into decisions, and your arguments don’t seem to be enough for refuting them as such.

Yes, agreed, but these two things become intertwined when a philosophy makes people decide to stop creating knowledge. In this case, it’s longtermism preventing the creation of moral and scientific knowledge by grinding the process of error correction to a halt, where “error correction” in this context means continuously reevaluating philanthropic organizations based on their near- and medium-term consequences, in order to compare results obtained against results expected.
Both approaches pass the buck; that’s why I defined ‘creativity’ here to mean ‘whatever unknown software the brain is running to get out of the infinite regress problem.’ And one doesn’t necessarily need to answer your question, because there’s no requirement that the criticism take EV form (although it can).
Hey Vaden, thanks!
these two things become intertwined when a philosophy makes people decide to stop creating knowledge

Yeah, fair. (Although this is less relevant to less naive applications of this philosophy, which, as Ben puts it, draw some rather than all of our attention away from knowledge creation.)
Both approaches pass the buck

I’m not sure I see where you’re coming from here. EV does pass the buck on plenty of things (on how to generate options, utilities, and probabilities), but as I put it, I thought it directly answered the question (rather than passing the buck) of what kinds of bets to make / how to act under uncertainty:

we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.
Also, regarding this:

And one doesn’t necessarily need to answer your question, because there’s no requirement that the criticism take EV form

I don’t see how that gets you out of facing the question. If criticism uses premises about how we should act under uncertainty (which it must, to have bearing on our choices), then a discussion will remain badly unfinished until those premises have been scrutinized. We could scrutinize them on a case-by-case basis, but that wastes time if some kinds of premises can be refuted in general.
Check out chapter 13 in Beginning of Infinity when you can—everything I was saying in that post is much better explained there :)