Bayesianism is largely about how to assign probabilities to things; it is not an ethical/normative doctrine like utilitarianism that tells you how you should prioritize your time. And as a (non-naïve) utilitarian will emphasize, when doing so-called “utilitarian calculus” (and related forms of analysis) is less efficient/effective than relying on intuition, then you should rely on intuition.
Especially when dealing with facially implausible/far-fetched claims about extremely high risk, I think it’s helpful to fight dubious fire with similarly dubious fire and then trim off the ashes: if someone says “there’s a slight (0.001%) chance that this (weird/dubious) intervention Y could prevent extinction, and that’s extremely important,” you might be able to argue that it is equally or even more likely that doing Y backfires, or that doing Y prevents you from doing intervention Z, which plausibly has a similar (unlikely) chance of preventing extinction. (See the longer illustration below.)
In the end, these two points are not the only things to consider, but I think they tend to be the most neglected/overlooked whereas the complementary concepts are decently understood (although I might be forgetting something else).
Regarding point 2 in more detail: take, for example, a classic Pascal’s-mugging-type situation: “A strange-looking man in a suit walks up to you and says that he will warp up to his spaceship and detonate a super-mega nuke that will eradicate all life on earth if and only if you do not give him $50 (which you have in your wallet), but he will give you $3^^^3 tomorrow if and only if you give him $50.” We could technically/formally grant a nonzero chance that he is being truthful (e.g., 0.0000000001%) and still abide by expected value reasoning if we suppose that there are indistinguishably likely cases that produce the opposite expected value—for example, the possibility that he will do the exact opposite of what he says if you give him the money (for comparison, see the “philosopher’s God” response to Pascal’s wager), or the possibility that the “true” mega-punisher/rewarder is actually just a block down the street, and if you give your money to this random lunatic you won’t have the $50 to give to the true one (for comparison, see the “other religions” response to the narrow, Christianity-specific Pascal’s wager). More realistically, that $50 might be better donated to an X-risk charity. Add in the fact that stopping to think through this entire situation is itself a waste of time you could be using to help avert catastrophes in some other way (e.g., earning money to donate to X-risk charities), and you have a pretty strong case for not entertaining the fantasy for even a few seconds, and thus not getting paralyzed by a naive application of expected value theory.
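To make the cancelling-out arithmetic concrete, here is a toy sketch in Python. All of the numbers are made up purely for illustration; the point is only the symmetry of the opposite-outcome hypothesis:

```python
# Toy sketch of the "cancelling out" point for Pascal's mugging.
# Both probabilities and the payoff magnitude are illustrative assumptions.

p = 1e-12        # assumed (made-up) probability the mugger is truthful
payoff = 1e30    # stand-in magnitude for the astronomically large promised reward

# Hypothesis A: he is truthful, so paying yields +payoff.
# Hypothesis B: he does the exact opposite of what he says, so paying yields -payoff.
# Granting both the same tiny probability, the astronomical terms cancel exactly,
# and paying is left as a near-guaranteed loss of the $50.
ev_of_paying = p * payoff + p * (-payoff) - 50

print(ev_of_paying)  # -50.0: only the certain loss of the $50 remains
```

Adding a third hypothesis (the “true” rewarder down the street, or the X-risk charity) only pushes the expected value of paying further negative.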
Thanks for your reply. A follow-up question: when I see the ‘cancelling out’ argument, I always wonder why it doesn’t apply to the X-risk case itself. It seems to me that you could just as easily argue that halting biotech research in order to enter the Long Reflection might backfire in some unpredictable way, or that aiming at Bostrom’s utopia would ruin the chances of ending up in a vastly better state that we had never even dreamt of—and so on and so forth.
Isn’t the whole case for longtermism so empirically uncertain as to be open to the ‘cancelling out’ argument as well?
I do understand what you are saying, but my response (albeit as someone who is not steeped in longtermist/X-risk thought) would be “not necessarily (and almost certainly not entirely).” The tl;dr version is “there are lots of claims about X-risks, and about interventions to reduce X-risks, that are reasonably more plausible than their reverse claims.” For example, there are decent reasons to believe that certain forms of pandemic preparation reduce X-risk more than they increase it. I can’t (yet) give full, formalistic rules for how I apply the trimming heuristic, but some of the major points are discussed in the blocks below.
One key to using/understanding the trimming heuristic is that it is not meant to directly maximize the accuracy of your beliefs; rather, it’s meant to improve the effectiveness of your overall decision-making *in light of constraints on your time/cognitive resources*. If we had infinite time to evaluate everything—even possibilities that seem like red herrings—it would probably (usually) be optimal to do so, but we don’t, so we have to decide what to spend our time analyzing and what to accept as a “best guesstimate” for particularly fuzzy questions. Here, intuition (including about “when should we rely on various levels of intuition/analysis”) can be far more effective than formalistic rules.
I think another key is the distinction between risk and uncertainty: (to heavily simplify) risk refers to confidently verifiable/specific probabilities (e.g., a 1/20 chance of rolling a 1 on a standard 20-sided die), whereas uncertainty refers to situations where we don’t confidently know the specific degree of risk (e.g., the chance of rolling a 1 on a confusingly-shaped 20-sided die that has never yet rolled a 1, but perhaps might eventually).
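A quick sketch of that difference: under risk, a single expected value is well-defined; under uncertainty, the “expected value” depends on which probability within a fuzzy range turns out to be true. The 0–10% range for the oddly-shaped die is an arbitrary assumption made purely for illustration:

```python
import random

# Risk: a known, verifiable probability -- a fair 20-sided die.
p_risk = 1 / 20

# Uncertainty: we don't know the probability itself, only a fuzzy range for it
# (the confusingly-shaped die). The 0%-10% endpoints here are assumptions.
random.seed(0)  # seeded so the sketch is reproducible
p_samples = [random.uniform(0.0, 0.10) for _ in range(10_000)]

stake = 1_000  # arbitrary stake to turn probabilities into expected values
ev_risk = p_risk * stake
ev_uncertain_range = (min(p_samples) * stake, max(p_samples) * stake)

print(ev_risk)             # 50.0 -- a single point estimate
print(ev_uncertain_range)  # a wide spread of possible expected values, not a point
```

The practical upshot: claims resting on the second kind of number are much better candidates for the trimming heuristic than claims resting on the first.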
In the end, I think my three-to-four conditions (or at least factors) for using the trimming heuristic are:
There is a high degree of uncertainty associated with the claim (e.g., it is not a well-established fact that there is a +0.01% chance of extinction upon enacting this policy)
The claim seems rather implausible/exaggerated on its face, but would require a non-trivial amount of time to clearly explain why (since it gets increasingly difficult to show why you ought to increase the number of zeros after a decimal point)
You can quickly fight fire with fire (e.g., think of opposite-outcome claims like I described)
There are other, more-realistic arguments to consider and your time is limited.
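As a rough illustration, the four factors above could be sketched as a checklist function. The condition names and the “trim only when all four hold” rule are my own informal paraphrase, not a formal decision procedure:

```python
# Informal sketch of the trimming heuristic's four conditions as a checklist.
# Treating them as a strict conjunction is a simplifying assumption; in practice
# they are weighed intuitively, not evaluated as booleans.

def should_trim(claim: dict) -> bool:
    conditions = [
        claim["highly_uncertain"],        # 1: no well-established probability
        claim["facially_implausible"],    # 2: implausible, but slow to rebut formally
        claim["symmetric_counter_easy"],  # 3: an opposite-outcome claim is easy to state
        claim["better_uses_of_time"],     # 4: more-realistic arguments are waiting
    ]
    return all(conditions)

pascals_mugging = {
    "highly_uncertain": True,
    "facially_implausible": True,
    "symmetric_counter_easy": True,
    "better_uses_of_time": True,
}
print(should_trim(pascals_mugging))  # True: trim it and move on
```

By contrast, a well-evidenced claim (e.g., a specific pandemic-preparedness intervention) would fail conditions 1–3 and so would not be trimmed.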