This post is part of Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else.
Introduction
RP has committed itself to doing good. Given the limits of our knowledge and abilities, we won’t do this perfectly, but we can do it in a principled manner. There are better and worse ways to work toward our goal. In this post, we discuss some of the practical steps that we’re taking to navigate uncertainty, improve our reasoning transparency, and make better decisions. In particular, we want to flag the value of three changes we intend to make:
Incorporating multiple decision theories into Rethink Priorities’ modeling
More rigorously quantifying the value of different courses of action
Adopting transparent decision-making processes
Using Multiple Decision Theories
Decision theories are frameworks that help us evaluate and make choices under uncertainty about how to act.[1] Should you work on something that has a 20% chance of success and a pretty good outcome if it succeeds, or work on something that has a 90% chance of success but only a weakly positive outcome if it succeeds? Expected value theory is the typical choice to answer that type of question. It calculates the expected value (EV) of each action by multiplying the value of each possible outcome by its probability and summing the results, recommending the action with the highest expected value. But because low probabilities can always be offset by corresponding increases in the value of outcomes, traditional expected value theory is vulnerable to the charge of fanaticism, “risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential” (Beckstead and Thomas, 2021). Put differently, it seems to recommend spending all of our efforts on actions that, predictably, won’t achieve our ends.
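To make the opening comparison concrete, here is a minimal sketch in Python; the payoff numbers (100 units, 15 units) are invented for illustration:

```python
def expected_value(outcomes):
    """EV of an action: sum of probability * value over its possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Hypothetical payoffs: 100 units if the risky project succeeds, 15 units if
# the safe project succeeds, 0 for failure in both cases.
risky = [(0.20, 100), (0.80, 0)]  # 20% chance of a pretty good outcome
safe = [(0.90, 15), (0.10, 0)]    # 90% chance of a weakly positive outcome
print(expected_value(risky), expected_value(safe))  # ~20 vs. ~13.5

# Fanaticism: an astronomically valuable long shot beats both, no matter how
# improbable, because value can always offset probability.
long_shot = [(1e-12, 1e15), (1 - 1e-12, 0)]
print(expected_value(long_shot))  # ~1000
```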
Alternative decision theories have significant drawbacks of their own, each giving up one plausible axiom or another. The simplest alternative is expected value maximization with very small probabilities rounded down to zero. This gives up the axiom of continuity, which says that for any options A ≥ B ≥ C, there is some probability p at which you would be indifferent between B and a gamble that yields A with probability p and C otherwise. Violating continuity produces some weird outcomes: believing the chance of something is 1 in 100,000,000,000 can mean an action gets no weight, while believing it’s 1.0000001 in 100,000,000,000 means that the option dominates your considerations if the expected value upon success is high enough, which is a kind of attenuated fanaticism. There are also other problems, like setting the threshold below which you should round down.[2]
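A sketch of that rounding-down procedure, with the threshold (itself an assumption, per the footnote) set so that the two probabilities from the text fall on opposite sides of it:

```python
def ev_rounding_down(outcomes, threshold=1e-11):
    """Expected value, except probabilities at or below the threshold are
    treated as zero. Where to set the threshold is itself an open problem."""
    return sum(p * v for p, v in outcomes if p > threshold)

huge = 1e15  # a hypothetical astronomical payoff

# 1 in 100,000,000,000 is rounded to zero weight...
print(ev_rounding_down([(1e-11, huge)]))          # 0
# ...while 1.0000001 in 100,000,000,000 clears the threshold, and the huge
# payoff dominates: attenuated fanaticism at the boundary.
print(ev_rounding_down([(1.0000001e-11, huge)]))  # ~10000
```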
Alternatively, you could adopt a procedure like weighted-linear utility theory (WLU) (Bottomley and Williamson, 2023), but that gives up the principle of homotheticity, which requires indifference to mixing a given set of options with the worst possible outcome. Or you could adopt a version of risk-weighted expected utility (REU) (Buchak, 2013) and give up the axiom of betweenness, which suggests that the order in which you are presented information shouldn’t alter your conclusions.[3]
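Simplified sketches of both procedures may help. The risk function and weighting function below are our own illustrative choices (both theories leave these functions up to the agent), and the payoff numbers are invented:

```python
def reu(outcomes, risk=lambda p: p ** 2):
    """Risk-weighted expected utility (after Buchak 2013), sketched: sort
    outcomes from worst to best, then weight each increment of value by
    risk(probability of doing at least that well). A convex risk function
    like p**2 discounts improbable gains."""
    outs = sorted(outcomes, key=lambda pv: pv[1])
    total = outs[0][1]
    for i in range(1, len(outs)):
        p_at_least = sum(p for p, _ in outs[i:])
        total += risk(p_at_least) * (outs[i][1] - outs[i - 1][1])
    return total

def wlu(outcomes, weight=lambda v: 1 / (1 + abs(v))):
    """Weighted-linear utility (after Bottomley and Williamson 2023), sketched:
    a weighted average of values in which the weights depend on the values
    themselves, so extreme long-shot payoffs can count for less."""
    norm = sum(p * weight(v) for p, v in outcomes)
    return sum(p * weight(v) * v for p, v in outcomes) / norm

risky = [(0.20, 100), (0.80, 0)]  # 20% chance of a pretty good outcome
safe = [(0.90, 15), (0.10, 0)]    # 90% chance of a weakly positive outcome

# Pure EV prefers the risky project (~20 vs. ~13.5); under these risk-averse
# parameterizations, both procedures reverse that ranking.
print(reu(risky), reu(safe))
print(wlu(risky), wlu(safe))
```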
It’s very unclear to us, for example, that giving up continuity is preferable to giving up homotheticity, and neither REU nor WLU logically eliminates fanaticism (even if, in practice, WLU seems to assign negative value to long-shot possibilities in general).[4] Once you switch from pure EV to other theories, whether REU, WLU, expected utility with rounding down, or some future option, there isn’t an option that’s clearly best. Instead, many arguments rely on competing, but ultimately not easily resolvable, intuitions about which set of principles is best. Still, at worst, the weaknesses of these alternatives seem similar in scope to pure EV’s weakness of logically suggesting that we spend (and predictably waste) all of our resources not on activities like x-risk prevention or insect welfare, but on actions like interacting with the multiverse or improving the welfare of protons.[5]
Broadly, given their various axiomatic and applied strengths and weaknesses, we don’t think the choice among decision theories is the type of claim you can be highly confident about. For this reason, we think you would need to be unreasonably confident that a given procedure, or set of procedures that agree on the types of actions they suggest, is correct (possibly >90%) for the uncertainty across theories and their implications not to affect your actions.[6] While there are arguments and counterarguments for many of these theories, we’re more confident in the broad claim that no argument for one of these theories over all the others is decisive than we are in any particular argument or reply for any given theory.
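The credence arithmetic in the accompanying footnote can be made explicit; the numbers below are the footnote’s illustrative ones:

```python
def theory_split(p_ev, p_rounding_if_not_ev):
    """Split total credence among pure EV, EV-with-rounding-down, and
    everything else, given a credence in pure EV and a conditional credence
    in rounding down if pure EV turns out to be wrong."""
    p_round = (1 - p_ev) * p_rounding_if_not_ev
    return p_ev, p_round, 1 - p_ev - p_round

# 50% / 50% gives roughly a 50/25/25 split; 70% / 70% gives roughly 70/21/9.
print(theory_split(0.5, 0.5))
print(theory_split(0.7, 0.7))
```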
So, we still plan to calculate the EV of the actions available to us, since we think in most cases this is identical to EV with rounding down. However, we will no longer calculate only the EV of those actions.[7] We now plan to use other decision theories as well, like REU and WLU, to get a better understanding of the riskiness of our options. This allows us, among other things, to identify options that are robustly good under decision-theoretic uncertainty. (As Laura Duffy finds in a general discussion of risk aversion and cause prioritization, when considering only the next few generations, work on corporate campaigns for chickens fits this description: it’s never the worst option and rarely produces negative value across these procedures.) Using a range of decision theories also helps us represent internal disagreements more clearly: sometimes people agree on the probabilities and values of various outcomes but disagree about how to weigh low probabilities, negative outcomes, or outcomes where our gamble doesn’t pay off. By formalizing these disagreements, we can sometimes resolve them.
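A toy version of that robustness check. The three decision rules and all the payoff numbers are stand-ins (maximin substitutes for the genuinely risk-averse theories), not RP’s actual models:

```python
# Each 'theory' is a scoring function from a set of (probability, value)
# outcomes to a number; higher is better.
def ev(outs):
    return sum(p * v for p, v in outs)

def ev_rounded(outs, threshold=1e-10):
    return sum(p * v for p, v in outs if p > threshold)

def maximin(outs):  # crude risk-averse stand-in: judge by the worst outcome
    return min(v for _, v in outs)

options = {
    "long shot": [(1e-12, 1e15), (1 - 1e-12, -1)],
    "chicken campaigns": [(0.7, 30), (0.3, 5)],
    "moderate bet": [(0.2, 100), (0.8, -2)],
}
theories = [ev, ev_rounded, maximin]

def worst_under(theory):
    return min(options, key=lambda name: theory(options[name]))

# An option is 'robustly good' in this toy sense if no theory ranks it last.
robust = [name for name in options
          if all(worst_under(t) != name for t in theories)]
print(robust)  # only the safe, moderately valuable option survives
```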
Quantify, Quantify, Quantify
We’ve long built models to inform our decision-making.[8] However, probabilities can be unintuitive, and the results of more rigorous calculations are often surprising. We’ve discovered during the CURVE sequence, for instance, that small changes to the kinds and levels of risk aversion can alter what you ought to do; and, even if you assume that you ought to maximize expected utility, small adjustments to future risk structures and value trajectories have significant impacts on the expected value of existential risk mitigation work. And, of course, before the present sequence, RP built many models, for example, to estimate moral weights for animals, finding significant variance across them.[9]
What’s more, there are key areas where we know our models are inadequate. For example, it’s plausible that returns on different kinds of spending diminish at different rates, but estimating these rates remains difficult. We need to do more work to make thoughtful tradeoffs between, say, AI governance efforts and attempts to improve global health. Likewise, it’s relatively straightforward to assess the counterfactual credit due to some animal welfare interventions but extremely difficult to estimate the counterfactual credit due to efforts to reduce the risk of nuclear war. Since these kinds of factors could swing overall cost-effectiveness analyses, it’s crucial to keep improving our understanding of them. So, we’ll keep investigating these issues as systematically as we can.
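As a stylized example of why those rates matter, suppose two causes both have logarithmic returns but saturate at different scales (every number below is invented): the ranking by marginal value can flip as spending grows.

```python
import math

def value(spend, scale, halfway):
    """Toy diminishing-returns curve: value = scale * ln(1 + spend / halfway)."""
    return scale * math.log(1 + spend / halfway)

def marginal(spend, scale, halfway, step=1.0):
    """Approximate value of the next dollar at a given spending level."""
    return value(spend + step, scale, halfway) - value(spend, scale, halfway)

cause_a = dict(scale=100, halfway=1_000)   # cost-effective early, saturates fast
cause_b = dict(scale=300, halfway=20_000)  # slower start, much larger capacity

for spent in (0, 5_000, 50_000):
    better = "A" if marginal(spent, **cause_a) > marginal(spent, **cause_b) else "B"
    print(f"after {spent:>6} spent, the next dollar does more in cause {better}")
```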
None of this is to say that we take the outputs of these quantitative models literally. We don’t. Nor is it to claim that there is no place at all for qualitative inputs or reasoning in our decision-making. It is to say that quantifying our uncertainties whenever possible generally helps us make better decisions. The difficulty of accounting for all of the above issues is typically made worse, not better, when precise quantitative statements of beliefs or inputs are replaced by softer qualitative judgments. We think the work in the CURVE sequence has further bolstered the case that, even when you can’t be precise in your estimates, quantifying your uncertainty can still significantly improve your ability to reason carefully.
Transparent Decision-Making
Knowing how to do good was hard enough before we introduced alternative decision theories. Still, RP has to make choices about how to distribute its resources, navigating deep uncertainty and, sometimes, differing perspectives among our leadership and staff. Since we want to make our choices sensitive to our evidential situation and transparent within the organization, we’re committed to finding a decision-making strategy that allows us to navigate this uncertainty in a principled manner. Thankfully, there is a wide range of deliberative decision-making processes, such as Delphi panels and citizen juries, available for just such purposes.[10] Moreover, there are a number of formal and informal methods of judgment aggregation that can be used at the end of these deliberative efforts.
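As one simple formal aggregation rule, a panel’s probability judgments can be pooled by taking the geometric mean of their odds. This particular rule, and the panel numbers below, are purely illustrative, not a method RP has committed to:

```python
import math

def pool_geometric_odds(probs):
    """Aggregate individual probabilities via the geometric mean of their odds,
    one common pooling rule for combining judgments after deliberation."""
    odds = [p / (1 - p) for p in probs]
    pooled = math.prod(odds) ** (1 / len(odds))
    return pooled / (1 + pooled)

# Hypothetical post-deliberation credences from three panelists
panel = [0.2, 0.5, 0.7]
print(round(pool_geometric_odds(panel), 3))  # a single pooled credence
```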
We aren’t yet sure which of these particular decision procedures we’ll use, and we expect that creating and executing such a process will take time.[11] All of these procedures have drawbacks in particular contexts, and we don’t expect any single procedure to handle all the specific decisions that RP faces. However, we’re confident that a clearly defined decision procedure that forces us to be explicit about the tradeoffs we’re making, and why, is superior to unilateral and intuition-based decision-making. We want to incorporate the best judgment of the leaders in our organization and own the intra- and inter-cause comparisons on which our decisions are based. So, we’re in the process of setting up such decision procedures and will report back what we can about how they’re operating.
Conclusion
We want to do good. The uncertainties involved in doing good are daunting, particularly given that we are trying to take an approach that is impartial, scope-sensitive, and open to revision. However, RP aims to be a model of how to handle uncertainty well. In part, of course, this requires trying to reduce our uncertainty. But lately, we’ve been struck by how much it requires recognizing the depth of our uncertainty, all the way down to the very frameworks we use for decision-making under uncertainty. We are trying to take this depth seriously without becoming paralyzed, which explains why we’re doubling down on modeling and collective decision-making procedures.
In practice, we suspect that a good rule of thumb is to spread our bets across our options. Essentially, we think we’ve entered a dizzying casino where the house won’t even tell us the rules of the game. And even if we knew the rules, we’d face a host of other uncertainties: the long-term payouts of various options, the risk of being penalized if we choose incorrectly among various courses of action, and a host of completely inscrutable possibilities about which we have no idea what to think. In a situation of this type, it seems like a mistake to assume that one ruleset is correct and proceed accordingly. Instead, we want to find robustly good options across different plausible rulesets whenever we can. And when we can’t, we may want to distribute our resources in proportion to our credence in different reasonable approaches to prioritization.
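The fallback in the last sentence is easy to state in code; the credences and the budget below are purely hypothetical:

```python
# Hypothetical credences in three reasonable approaches to prioritization.
credences = {"pure EV": 0.40, "EV with rounding down": 0.35, "risk-weighted": 0.25}
budget = 1_000_000

# Allocate the budget in proportion to credence in each approach.
total = sum(credences.values())
allocation = {approach: budget * c / total for approach, c in credences.items()}
for approach, dollars in allocation.items():
    print(f"{approach}: ${dollars:,.0f}")
```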
This isn’t perfect or unobjectionable. But nothing is. RP will continue to do its best to make these decisions as transparently as we can, learning from our mistakes and continuing to try to advance the cause of improving the world.
Acknowledgements
The piece was written by Marcus A. Davis and Peter Wildeford. Thanks to David Moss, Abraham Rowe, Janique Behman, Carolyn Footitt, Hayley Clatterbuck, David Rhys Bernard, Cristina Schmidt Ibáñez, Jacob Peacock, Aisling Leow, Renan Araujo, Daniela R. Waldhorn, Onni Aarne, Melissa Guzikowski, and Kieran Greig for feedback. A special thanks to Bob Fischer for writing a draft of this post. The post is a project of Rethink Priorities, a global priorities think-and-do tank aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits and other entities. If you’re interested in Rethink Priorities’ work, please consider subscribing to our newsletter. You can explore our completed public work here.
Footnotes
[1] As discussed in this post, when we refer to “decision theories,” we mean normative theories of rational choice, setting aside the distinction between evidential and causal decision theory. That distinction concerns whether one should choose actions based on their expected causal effects (causal decision theory) or based on their news value, that is, taking the action you would most want to learn that you are going to take, whether or not that value is driven by causal effects (evidential decision theory).
[2] This is something we would like to see explored further in research. Presently, the choice of where to set the threshold seems somewhat arbitrary: there are no solid arguments for any particular threshold that don’t refer to hypothetical or real cases and consider whether the outcomes of those cases are acceptable.
[3] Violations of homotheticity and betweenness are both violations of the principle of independence, which decomposes into these two principles. As such, both REU and WLU violate independence.
[4] We are aware that discussion of these principles can sound rather abstract. We think it’s fine to be unfamiliar with these axioms and what they imply (we also weren’t familiar with them before the past few years). What seems less ideal is having an unshakable belief that a particular rank ordering of these abstract principles is simple or obvious, such that you can easily select a particular decision theory as superior to others, particularly once you decide to avoid fanaticism.
[5] Some may doubt that EV would require this, but if you preemptively rule out really implausible actions that could have enormous value at tiny probabilities (like extending the existence of the universe), then in practice you are likely calculating expected value maximization with rounding down. This is what we think most actors in the EA space have been doing in practice, rather than pure expected value maximization. For more on why, and on which axioms different options (including expected value maximization with rounding down) give up, see the WIT sequence supplement from Hayley Clatterbuck on Fanaticism, Risk Aversion, and Decision Theory. For more on why fanaticism doesn’t endorse x-risk prevention or work on insects, see Fanatical EAs should support very weird projects by Derek Shiller. For more on how one might retain most of the value of expectational reasoning without requiring actions like this, see Tarsney (2020), Exceeding Expectations: Stochastic Dominance as a General Decision Theory.
[6] Suppose, as an example, you are ~50% confident in pure EV and 50% confident that, conditional on pure EV being incorrect, EV with rounding down is best. That would imply an absolute credence of 25% in EV with rounding down and a 25% credence that some other non-EV option is correct. If you were 70% confident in EV and 70% confident, conditional on it being false, that EV with rounding down is right, your split would be 70% EV, 21% EV with rounding down, and 9% something else. If you were instead equally uncertain across the theories discussed above, that would imply a 25% credence in each of WLU, REU, pure EV, and EV with rounding down (assuming you assigned no weight to other known theories or to the possibility of future theories distinct from these options). Overall, because these theories often directionally disagree about the best actions, your confidence across theories needs to line up just right for this uncertainty not to affect which actions are recommended.
[7] A counterargument here would be to say that expected utility, or expected utility with rounding down, is clearly superior to these other options, and as such we should do whatever it says. In addition to our broader concern that the available evidence is not definitive, one problem with this response is that it assumes either that the correct aggregation method across decision procedures heavily favors EV outputs (in practice or for a theoretical reason) or that we can now be confident that all the alternatives are incorrect (i.e., that the weight we should put on them is below ~1%). Neither move seems justifiable given our present knowledge. It’s worth noting that in their 2021 paper, The Evidentialist’s Wager, MacAskill et al. discuss aggregating evidential and causal decision theories, but, for a variety of reasons, we don’t think the solutions proposed for that related but separate dilemma apply here.
[8] For example, we’ve built models to estimate the cost-effectiveness of particular interventions and to retrospectively assess the value of our research itself, both at the organizational level and at the level of individual projects. These models have often been inputs into our decision-making and into what we advise others to do.
[10] In this context, citizen juries, Delphi panels, and other deliberative decision-making procedures would be designed to help us assign credences across different theories, or to make specific decisions in the face of uncertainty and disagreement among participants.
[11] We also aren’t sure when we’ll do these things, as they all take time and money. For example, analyzing different decision-making frameworks and thinking through the cost curves across interventions could involve ~3-6 months of work from multiple people.
How Rethink Priorities is Addressing Risk and Uncertainty
This post is part of Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else.
Introduction
RP has committed itself to doing good. Given the limits of our knowledge and abilities, we won’t do this perfectly but we can do this in a principled manner. There are better and worse ways to work toward our goal. In this post, we discuss some of the practical steps that we’re taking to navigate uncertainty, improve our reasoning transparency, and make better decisions. In particular, we want to flag the value of three changes we intend to make:
Incorporating multiple decision theories into Rethink Priorities’ modeling
More rigorously quantifying the value of different courses of action
Adopting transparent decision-making processes
Using Multiple Decision Theories
Decision theories are frameworks that help us evaluate and make choices under uncertainty about how to act.[1] Should you work on something that has a 20% chance of success and a pretty good outcome if success is achieved, or work on something that has a 90% chance of success but only a weakly positive outcome if achieved? Expected value theory is the typical choice to answer that type of question. It calculates the expected value (EV) of each action by multiplying the value of each possible outcome by its probability and summing the results, recommending the action with the highest expected value. But because low probabilities can always be offset by corresponding increases in the value of outcomes, traditional expected value theory is vulnerable to the charge of fanaticism, “risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential” (Beckstead and Thomas, 2021). Put differently, it seems to recommend spending all of our efforts on actions that, predictably, won’t achieve our ends.
Alternative decision theories have significant drawbacks of their own, giving up one plausible axiom or another. The simple alternative is expected value maximization but with very small probabilities rounded down to zero. This gives up the axiom of continuity, which suggests for a relation of propositions A ≥ B ≥ C, that there exists some probability that would make you indifferent between B and a probabilistic combination of A and C. This violation causes some weird outcomes where, say, believing the chance of something is 1 in 100,000,000,000 can mean an action gets no weight but believing it’s 1.0000001 in 100,000,000,000 means that the option dominates your considerations if the expected value upon success is high enough, which is a kind of attenuated fanaticism. There are also other problems like setting the threshold for where you should round down.[2]
Alternatively, you could go with a procedure like weighted-linear utility theory (WLU) (Bottomley and Williamson, 2023), but that gives up the principle of homotheticity, which involves indifference to mixing a given set of options with the worst possible outcome. Or you could go with a version of risk-weighted expected utility (REU) (Buchak, 2013) and give up the axiom of betweenness which suggests the order in which you are presented information shouldn’t alter your conclusions.[3]
It’s very unclear to us, for example, that giving up continuity is preferable to giving up homotheticity, and neither REU or WLU really logically eliminate issues with fanaticism (even if it seems in practice, say, WLU produces negative values for long shot possibilities in general)[4]. It seems once you switch from pure EV to other theories, whether it be REU, WLU, expected utility with rounding down, or some other future option, there isn’t an option that’s clearly best. Instead, many arguments rely on competing, but ultimately not easily resolvable, intuitions about which set of principles are best. Still, at worst, it seems the weaknesses in these alternative options are similar in scope to the amount of weakness provided to pure EV logically suggesting spending (and predictably wasting) all of our resources not on activities like x-risk prevention or insect welfare, but on actions like interacting with the multiverse or improving the welfare of protons.[5]
Broadly, we don’t think decision theories with various strengths and weaknesses, axiomatic and applied, are the type of claim you can be highly confident about. For this reason, we ultimately think you need to be unreasonably confident that a given procedure, or set of procedures that agree on the types of actions they suggest, is correct (possibly >90%) in order for the uncertainty across theories and what they imply not to impact your actions.[6] While there are arguments and counterarguments for many of these theories, we’re more confident in the broad claim that no arguments for one of these theories over all the others is decisive than we are in any particular argument or reply for any given theory.
So, we still plan to calculate the EV of the actions available to us, since we think in most cases this is identical to EV with rounding down. However, we won’t only calculate the EV of those actions anymore.[7] Now, we plan to use other decision theories as well, like REU and WLU, to get a better understanding of the riskiness of our options. This allows us, among other things, to identify options that are robustly good under decision theoretic uncertainty. (As Laura Duffy notes in a general discussion of risk aversion and cause prioritization and in the case of only the next few generations, work on corporate campaigns for chickens fits this description: it’s never the worst option and rarely produces negative value across these procedures). Using a range of decision theories also helps us represent internal disagreements more clearly: sometimes people agree on the probabilities and values of various outcomes, but disagree about how to weigh low probabilities, negative outcomes, or outcomes where our gamble doesn’t pay off. By formalizing these disagreements, we can sometimes resolve them.
Quantify, Quantify, Quantify
We’ve long built models to inform our decision-making.[8] However, probabilities can be unintuitive and the results of more rigorous calculations are often surprising. We’ve discovered during the CURVE sequence, for instance, that small changes to different kinds and levels of risk-aversion can alter what you ought to do; and, even if you assume that you ought to maximize expected utility, making small adjustments to future risk structures and value trajectories have significant impacts on the expected value of the existential risk mitigation work. And, of course, before the present sequence, RP has built many models, for example, to try to estimate some moral weights for animals, finding significant variance across them.[9]
What’s more, there are key areas where we know our models are inadequate. For example, it’s plausible that returns on different kinds of spending diminish at different rates, but estimating these rates remains difficult. We need to do more work to make thoughtful tradeoffs between, say, AI governance efforts and attempts to improve global health. Likewise, it’s less complex to assess the counterfactual credit due to some animal welfare interventions but extremely difficult to estimate the counterfactual credit due to efforts to reduce the risk of nuclear war. Since these kinds of factors could swing overall cost-effectiveness analyses, it’s crucial to keep improving our understanding of them. So, we’ll keep investigating these issues as systematically as we can.
None of this is to say we take the outputs of these types of quantitative models literally. We don’t. Nor is it to claim there is no place at all for qualitative inputs or reasoning in our decision-making. It is to say we think quantifying our uncertainties whenever possible generally helps us to make better decisions. The difficulty of accounting for all of the above issues are typically made worse, not better, when precise quantitative statements of beliefs or inputs are replaced by softer qualitative judgments. We think the work in the CURVE sequence has further bolstered this case that even when you can’t be precise in your estimates, quantifying your uncertainty can still significantly improve your ability to reason carefully.
Transparent Decision-Making
Knowing how to do good was hard enough before we introduced alternative decision theories. Still, RP has to make choices about how to distribute its resources, navigating deep uncertainty and, sometimes, differing perspectives among our leadership and staff. Since we want to make our choices sensitive to our evidential situation and transparent within the organization, we’re committed to finding a decision-making strategy that allows us to navigate this uncertainty in a principled manner. Thankfully, there are a wide range of deliberative decision-making processes, such as Delphi panels and citizen juries, that are available for just such purposes.[10] Moreover, there are a number of formal and informal methods of judgment aggregation that can be used at the end of the deliberative efforts.
We aren’t yet sure which of these particular decision procedures we’ll use and we expect creating such a process and executing it to take time.[11] All of these procedures have drawbacks in particular contexts and we don’t expect any such procedure to be able to handle all the specific decisions that RP faces. However, we’re confident that a clearly defined decision procedure that forces us to be explicit about the tradeoffs we’re making and why is superior to unilateral and intuition-based decision-making. We want to incorporate the best judgment of the leaders in our organization and own the intra- and inter-cause comparisons on which our decisions are based. So, we’re in the process of setting up such decision procedures and will report back what we can about how they’re operating.
Conclusion
We want to do good. The uncertainties involved in doing good are daunting, particularly given we are trying to take an impartial, scope sensitive, open to revision approach. However, RP aims to be a model of how to handle uncertainty well. In part, of course, this requires trying to reduce our uncertainty. But lately, we’ve been struck by how much it requires recognizing the depth of our uncertainty—all the way to the very frameworks we use for decision-making under uncertainty. We are trying to take this depth seriously without becoming paralyzed—which explains why we’re doubling down on modeling and collective decision-making procedures.
In practice, we suspect that a good rule of thumb is to spread our bets across our options. Essentially, we think we’ve entered a dizzying casino where the house won’t even tell us the rules of the game. And even if we knew the rules, we’d face a host of other uncertainties: the long-term payouts of various options, the risk of being penalized if we choose incorrectly among various courses of action, and a host of completely inscrutable possibilities where we have no idea what to think of them. In a situation of this type, it seems like a mistake to assume that one ruleset is correct and proceed accordingly. Instead, we want to find robustly good options among different plausible rulesets whenever we can. And when we can’t, we may want to distribute our resources in proportion to different reasonable approaches to prioritization.
This isn’t perfect or unobjectionable. But nothing is. RP will continue to do its best to make these decisions as transparently as we can, learning from our mistakes and continuing to try to advance the cause of improving the world.
Acknowledgements
The piece was written by Marcus A. Davis and Peter Wildeford. Thanks to David Moss, Abraham Rowe, Janique Behman, Carolyn Footitt, Hayley Clatterbuck, David Rhys Bernard, Cristina Schmidt Ibáñez, Jacob Peacock, Aisling Leow, Renan Araujo, Daniela R. Waldhorn, Onni Aarne, Melissa Guzikowski, and Kieran Greig for feedback. A special thanks to Bob Fischer for writing a draft of this post. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you’re interested in Rethink Priorities’ work, please consider subscribing to our newsletter. You can explore our completed public work here.
As discussed in this post, when we refer to “decision theories” we are referring to normative theories of rational choice without regard to the distinction between evidential decision theory and causal decision theory. That distinction is about whether one should determine their actions based on expected causal effects, causal decision theory, or, for evidential decision theory, based on whether you should do what actions have the best news value (taking the action you will have wanted to learn that you will do), whether or not this was driven by causal effects.
This is something we would like to see explored further in research. Presently, the choice of where to set the threshold could seem to be somewhat arbitrary, with no current solid arguments about where to set such a threshold that doesn’t refer to hypothetical or real cases and consider whether outcomes of those cases are acceptable.
Violations of homotheticity and violations of betweenness are both violations of the principle of independence, which decomposes into these two principles. As such, both REU and WLU violate independence.
We are aware that discussion of these principles can sound rather abstract. We think it’s fine to be unfamiliar with these axioms and what they imply (we also weren’t familiar with them before the past few years). What seems less defensible is an unshakable belief that some rank ordering of these abstract principles is so simple or obvious that one can easily select a particular decision theory as superior to the others, particularly once one decides to avoid fanaticism.
Some may doubt that EV would require this, but if you preemptively rule out highly implausible actions, like extending the existence of the universe, that could have enormous value if achieved, however small the probability, then in practice you are likely calculating expected value maximization with rounding down. This is what we think most actors in the EA space have been doing in practice, rather than pure expected value maximization. For more on why, and on which axioms the different decision-theoretic options, including expected value maximization with rounding down, give up, see the WIT sequence supplement from Hayley Clatterbuck on Fanaticism, Risk Aversion, and Decision Theory. For more on why fanaticism doesn’t endorse x-risk prevention or work on insects, see Fanatical EAs should support very weird projects by Derek Shiller. For more on how one might retain most of the value of expectational reasoning without requiring actions like this, see Tarsney’s 2020 paper Exceeding Expectations: Stochastic Dominance as a General Decision Theory.
Suppose, as an example, you are ~50% confident in pure EV, and 50% confident that, conditional on pure EV being incorrect, EV with rounding down is best. That would imply an absolute credence of 25% in EV with rounding down and a 25% chance that some other non-EV option is correct. If you were 70% confident in EV and 70% confident, conditional on its being false, that EV with rounding down is right, your split would be 70% EV, 21% EV with rounding down, and 9% something else. If you were instead equally uncertain across the theories discussed above, that would imply a 25% credence in each of WLU, REU, pure EV, and EV with rounding down (assuming you assigned no weight to other known theories or to the possibility that there may, say, be future theories distinct from these known options). Overall, because these theories often directionally disagree about the best actions, your credences across theories would need to line up just right to avoid uncertainty about which actions are recommended.
A counterargument here would be to say that expected value, or expected value with rounding down, is clearly superior to these other options, and as such we should do whatever it says. In addition to our broader concerns about the available evidence not being definitive, one problem with this type of response is that it assumes either that the correct aggregation method across decision procedures heavily favors EV outputs (in practice or for a theoretical reason), or that we can be confident now that all these alternatives are incorrect (i.e., that the weight we should put on them is below ~1%). Neither move seems justifiable given our present knowledge. It’s worth noting that in their 2021 paper The Evidentialist’s Wager, MacAskill et al. discuss the aggregation of evidential and causal decision theories, but, for a variety of reasons, we don’t think the solutions posed for that related but separate dilemma apply here.
For example, we’ve built models to estimate the cost-effectiveness of particular interventions and to retrospectively assess the value of our research itself, both at the org level and at the level of individual projects. These models have often been inputs into our decision-making and into what we advise others to do.
Another example of the fragility of models is visible in Jamie Elsey’s and David Moss’s post Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell’s estimates for StrongMinds, which examines how modeling choices about handling uncertainty can significantly alter one’s conclusions.
In this context, citizen juries, Delphi panels, and other deliberative decision-making procedures would be designed to help us assign credences across different theories, or make specific decisions in the face of uncertainty and disagreement across participants.
We also aren’t sure when we’ll do these things, as they all take time and money. For example, analyzing different decision-making frameworks and thinking through the cost curves across interventions could involve ~3-6 months of work from multiple people.