[Question] Decision making under model ambiguity, moral uncertainty, and other agents with free will?

I am new to EA, and as I work on decision theory, game theory and welfare economics, I’m wondering how individuals or groups in the community typically make decisions such as prioritisation or budgeting. More precisely:

What is the state of the art, best practice, or common practice in the EA community for making individual decisions when various forms of ambiguity and/or non-quantifiable uncertainty are involved, such as...

  • model ambiguity (e.g. about prior probability distributions, conditional probabilities and other model parameters)

  • moral uncertainty (e.g. about risk attitudes, inequality aversion, time preferences, moral status of beings, value systems, etc.)

  • strategic ambiguity (e.g. how rational are other agents, and what can we really assume they will do, given that they might have free will?)

My own thoughts on how it might be done at least in theory are summarized below.

In that context, I also wonder:

Is there some place for smart collective decision making in this, e.g. in order to

  • increase the epistemic quality of decisions by crowd-sourcing information

  • raise the acceptability of decisions and thus improve implementation quality

  • deal robustly with moral uncertainty and diverse value assessments

And if so, what collective decision making mechanisms are most appropriate?

I’d be more than happy to hear your thoughts on this! Jobst

Rational choice

If there’s no model ambiguity, no moral uncertainty, and no other agents, the standard recipe (using Bayesian rational choice theory) would probably go something like this:

  1. Identify all relevant actions you could take now or later.

  2. Identify all relevant strategies you might pursue to decide what to do in the current situation and in each possible future decision situation.

  3. Identify all possible relevant features of the world that may influence the consequences of your actions.

  4. Identify all possible outcomes (possible trajectories of future states of the world) that may arise from all possible combinations of your possible strategies and the possible relevant features of the world.

  5. Identify all possible mechanisms that may lead from all those possible strategy-feature combinations to those possible outcomes.

  6. Represent these mechanisms by a suitable deterministic or, more likely, probabilistic causal model.

  7. Estimate the conditional probabilities occurring in the individual steps in that causal model. (Maybe use Bayesian updating on the basis of some prior plus some data for this)

  8. Identify other parameters of the model.

  9. Model your beliefs about what the actual features of the world and the remaining parameters of the model are in the form of a subjective joint probability distribution over all possible features-parameters combinations, taking into account possible correlations between these features and parameters. (Maybe use Bayesian updating for this as well)

  10. Use your beliefs, the causal model, and its conditional probabilities to calculate for each possible strategy a resulting probability distribution over all possible outcomes of that strategy.

  11. Evaluate each possible outcome according to your moral value system and condense this evaluation into one numerical evaluation for that outcome. (You might or might not call this evaluation the “utility” of the outcome)

  12. Use the outcome probability distribution and the outcome evaluations to evaluate each possible strategy, taking into account your risk attitudes. E.g., calculate the expected value of the outcome evaluation given the outcome distribution under that strategy (as in expected-utility theory), or use some other aggregation formula that aggregates the possible outcome evaluations into a strategy evaluation given the outcome distribution under that strategy (e.g., some form of (cumulative) prospect theory).

  13. Identify the strategy with the largest evaluation and adopt it. (One might or might not call this the “best” or “optimal” strategy.)

Complicated as this might seem, e.g. because there might be infinitely many possible actions, features, parameter values, or consequences, it is at least a relatively clear procedure.
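To make steps 10–13 a little more concrete, here is a minimal Python sketch. Everything in it is made up for illustration: the names `belief`, `outcome_model`, `value`, and the toy numbers are assumptions, not a recommended implementation.

```python
# A minimal sketch of steps 10-13. All names and numbers are made up:
# - `belief` is a subjective probability distribution over possible world features (step 9),
# - `outcome_model(strategy, features)` returns a probability distribution over outcomes,
#   standing in for the causal model of steps 6-7,
# - `value(outcome)` condenses an outcome into one number (step 11),
# - plain expected value is used as the aggregation rule of step 12.

def evaluate_strategy(strategy, belief, outcome_model, value):
    """Expected evaluation of one strategy under the subjective belief."""
    total = 0.0
    for features, p_features in belief.items():      # step 10: mix over possible world features
        for outcome, p_outcome in outcome_model(strategy, features).items():
            total += p_features * p_outcome * value(outcome)
    return total

def best_strategy(strategies, belief, outcome_model, value):
    """Step 13: adopt the strategy with the largest evaluation."""
    return max(strategies, key=lambda s: evaluate_strategy(s, belief, outcome_model, value))

# Toy usage: two strategies, two possible world features, outcomes are already numbers.
belief = {"world_A": 0.7, "world_B": 0.3}
outcome_table = {
    ("safe", "world_A"):  {1.0: 1.0},
    ("safe", "world_B"):  {1.0: 1.0},
    ("risky", "world_A"): {2.0: 0.9, -1.0: 0.1},
    ("risky", "world_B"): {-1.0: 1.0},
}
outcome_model = lambda strategy, features: outcome_table[(strategy, features)]
value = lambda outcome: outcome  # identity: outcomes are treated as their own "utilities"

print(best_strategy(["safe", "risky"], belief, outcome_model, value))  # -> "safe"
```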

But what if there is any form of ambiguity or uncertainty that does not fit well with the Bayesian approach of dealing with probabilistic uncertainty?

Let me next look at what happens if there are other relevant agents in the world besides you. Since this turns out to become quite complicated, you might want to skip the next section.

Game theory

Assume there’s no model ambiguity and no moral uncertainty, but there are other agents, all of whom we can assume to be rational. Then a possible approach goes something like this (or possibly something even more complicated, depending on the version of Bayesian or epistemic game theory one adopts):

  1. Proceed as above until step 9, but additionally:

    1. Identify other relevant agents and their possible actions and strategies.

    2. Incorporate this into the causal model.

    3. Estimate the other agents’ non-strategic beliefs (like you did your own in step 9 above), value systems, and risk attitudes, and model them via a large joint subjective probability distribution.

  2. Now comes the complicated part: Instead of proceeding as above from step 10 on, we first need to resolve the additional uncertainty that all agents have about what everyone will do, which will influence what they themselves are likely to do. Because we assume everyone is rational, we can do this by identifying all possible consistent profiles of strategic beliefs of all agents about all other agents’ strategies. One version of how one might do this in theory is the following:

    1. Assume each agent will ultimately encode their beliefs of the other agents in the form of another joint subjective probability distribution over all possible combinations of strategies of the other agents. Call these possible probability distributions the possible strategic beliefs of the agent.

    2. Call each possible combination of strategic beliefs, one for each agent, a possible strategic belief profile. For each possible strategic belief profile (called the “assumed strategic belief profile” in what follows), do the following:

      1. For each agent, including yourself, do the following:

        1. Use that agent’s non-strategic and strategic beliefs, the causal model, and its conditional probabilities to calculate for each possible strategy of that agent a resulting probability distribution over all possible outcomes of that strategy (similar to step 10 above)

        2. Use your beliefs about that agent’s moral value system and risk attitudes to identify which of their strategies that agent will evaluate most highly if they proceed as in steps 11–13 above. This results in one or, less likely, several “best” strategies for that agent.

      2. Check whether the resulting combinations of all agents’ “best” strategies are compatible with what you assumed that all agents believe about everyone’s strategies. More precisely:

        1. For each pair of agents X, Y, and each of Y’s strategies that has positive probability in the assumed strategic beliefs of X (!), verify that this strategy was indeed identified as a “best” strategy for Y. If this is not the case, then X’s belief that the rational agent Y might use this strategy is not a rational belief.

        2. If at least one of these tests fails, then the assumed strategic belief profile is not consistent with (common knowledge of) rationality: If agents would hold these beliefs, at least one of them would act in a way that violates these beliefs.

    3. Each possible strategic belief profile that passes this consistency test is a consistent strategic belief profile. (Game theorists call such a thing an “equilibrium” of some kind.)

  3. If we are lucky and there is exactly one consistent strategic belief profile, we may proceed as in steps 10–13 above, using these consistent strategic beliefs to identify our own “best” strategy, and then adopt it.

  4. If we are unlucky and there are several consistent strategic belief profiles, we face what game theorists call an equilibrium selection problem. This can be seen as yet another form of ambiguity: the ambiguity about what everyone’s strategic beliefs are.

  5. If we are extremely unlucky, there is not even one consistent strategic belief profile and the strategic belief ambiguity becomes even larger.

So, even in a Bayesian rational choice setting, as soon as there is more than one agent, ambiguity might easily arise that cannot be dealt with within the Bayesian rationality framework itself.
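For a finite game with point beliefs (each agent is believed to play exactly one strategy), the consistency test of step 2 collapses to checking mutual best responses, i.e. enumerating pure-strategy Nash equilibria. Here is a toy sketch under that assumption; the game (a stag hunt) and all payoffs are made up for illustration.

```python
from itertools import product

# Toy two-player game (a stag hunt, chosen only for illustration): each entry maps a
# (row_strategy, col_strategy) pair to (row_payoff, col_payoff). All numbers are made up.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
row_strategies = ["stag", "hare"]
col_strategies = ["stag", "hare"]

def is_consistent(row_s, col_s):
    """Point-belief consistency test: each agent's strategy must be a best response
    to the belief that the other agent plays their part of the profile."""
    best_row_payoff = max(payoffs[(s, col_s)][0] for s in row_strategies)
    best_col_payoff = max(payoffs[(row_s, s)][1] for s in col_strategies)
    return (payoffs[(row_s, col_s)][0] == best_row_payoff
            and payoffs[(row_s, col_s)][1] == best_col_payoff)

consistent_profiles = [p for p in product(row_strategies, col_strategies) if is_consistent(*p)]
print(consistent_profiles)  # -> [('stag', 'stag'), ('hare', 'hare')]
```

Even this trivial example yields two consistent profiles, so we already run into the equilibrium selection problem of step 4.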

Let’s look at something slightly simpler next:

Evaluation under model ambiguity

Let’s assume that we are the only agent and that the only uncertainty is about the values of certain features of the world and certain parameters of our causal model, but that this uncertainty is not quantifiable in the form of subjective beliefs as required in step 9 of the original recipe. In other words, we might have an idea of what values these features and parameters might have, but not how likely each possible value is. Let’s call this model ambiguity.

If our goal is only to evaluate a certain possible strategy rather than making a choice between several strategies, we might basically follow the original recipe for each possible value of the respective features and parameters and in the end use the “Hurwicz criterion” (Hurwicz 1951, https://cowles.yale.edu/sites/default/files/files/pub/cdp/s-0370.pdf):

  1. Proceed as in steps 1–8 of the original recipe.

  2. For each combination of values of the ambiguous features and parameters that you deem possible, perform steps 9–12 of the original recipe to calculate a conditional evaluation of the given strategy under the assumption of that possible combination of values.

  3. Among all these conditional evaluations for different feature and parameter combinations, identify the smallest (“worst-case”) conditional evaluation, W, and the largest (“best-case”) conditional evaluation, B.

  4. Calculate the overall evaluation of the strategy as h·W + (1 − h)·B, where h is a number between 0 and 1 that represents your degree of ambiguity aversion.
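A minimal sketch of this evaluation rule, assuming the conditional evaluations from step 2 have already been computed (the function name and all numbers are made up):

```python
def hurwicz_evaluate(conditional_evaluations, h):
    """Overall evaluation = h * worst case + (1 - h) * best case,
    where h in [0, 1] is the degree of ambiguity aversion."""
    worst = min(conditional_evaluations)  # W
    best = max(conditional_evaluations)   # B
    return h * worst + (1 - h) * best

# Toy usage: conditional evaluations of one strategy under three possible
# feature-parameter combinations (all numbers made up).
print(hurwicz_evaluate([1.0, 0.4, 2.5], h=0.8))  # 0.8 * 0.4 + 0.2 * 2.5 = 0.82
```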

Fine. But normally we only evaluate strategies because we want to make a choice between several strategies.

Choice under model ambiguity

To choose under model ambiguity, we could of course simply apply step 13 of the original recipe on the basis of the overall evaluations derived above, using the same value of h for each strategy.

The problem is that we might also want to do it differently...

Assume we have two strategies S and S’, with worst- and best-case evaluations W(S) = B(S) = 1, W(S’) = 0, and B(S’) = 2. Then it might seem that if h > 1/2 we should adopt strategy S and otherwise strategy S’. But what if there are thousands of possible parameter values, and for all but one of them strategy S’ is better than strategy S? Shouldn’t we then rather adopt S’?

The difference between the two approaches is basically this:

  • Either deal with model ambiguity first and identify “best” strategies later. Evaluate each strategy separately, taking into account ambiguity as above, and only then compare strategies’ overall evaluations as the last step to identify a “best” strategy.

  • Or identify “best” strategies first and deal with model ambiguity later. For each possible scenario (= feature-parameter combination), identify a “conditionally best” strategy, and only then deal with the model ambiguity by picking an “overall best” strategy in some way from all these “conditionally best” strategies.

It seems that many deliberation processes happening in the real world are more like the second form: What should we do if X? S! What should we do if Y instead? S’! Given these two optimal strategies and the fact that we do not know whether X or Y is true, should we now actually do S or S’?
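To see how the two orderings can disagree, here is a toy version of the example above in code, with one strategy that is better in almost every scenario but has a single bad outlier (all numbers are made up):

```python
# Conditional evaluations of two strategies across 1000 scenarios (all numbers made up):
# S evaluates to 1 everywhere; S' evaluates to 2 everywhere except one scenario, where it is 0.
evals_S = [1.0] * 1000
evals_Sprime = [2.0] * 999 + [0.0]

def hurwicz(evals, h):
    return h * min(evals) + (1 - h) * max(evals)

h = 0.8  # fairly ambiguity-averse

# Approach 1: deal with ambiguity first, then choose.
# S scores 1.0 while S' scores 0.8 * 0 + 0.2 * 2 = 0.4, so S wins.
print(hurwicz(evals_S, h), hurwicz(evals_Sprime, h))

# Approach 2: identify the conditionally best strategy per scenario, then aggregate.
# S' wins 999 of 1000 scenarios, which plausibly speaks for adopting S' instead.
winners = ["S" if s >= sp else "S'" for s, sp in zip(evals_S, evals_Sprime)]
print(winners.count("S'"), "of", len(winners), "scenarios favour S'")
```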

Moral uncertainty

In principle, we might deal with moral uncertainty in exactly the same way as with model ambiguity:

  • Either deal with moral uncertainty first and identify “best” strategies later. Evaluate each strategy separately, taking moral uncertainty into account in the same way as model ambiguity above: overall strategy evaluation = h·W + (1 − h)·B, where W and B are now the worst- and best-case evaluations across all moral value systems you consider possible. Only then compare strategies’ overall evaluations as the last step to identify a “best” strategy.

  • Or identify “best” strategies first and deal with moral uncertainty later. For each possible moral value system, identify a “conditionally best” strategy, and only then deal with the moral uncertainty by picking an “overall best” strategy in some way from all these “conditionally best” strategies.
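In code, the first variant is literally the same aggregation as for model ambiguity, just with candidate value systems in place of feature-parameter scenarios (a toy sketch; the value-system names and numbers are made up):

```python
# One strategy evaluated under three candidate value systems (names and numbers made up),
# aggregated with the same h * worst + (1 - h) * best rule as for model ambiguity.
evaluations_by_value_system = {"totalist": 3.0, "sufficientarian": 1.5, "egalitarian": 0.5}
h = 0.6
worst = min(evaluations_by_value_system.values())
best = max(evaluations_by_value_system.values())
print(h * worst + (1 - h) * best)  # 0.6 * 0.5 + 0.4 * 3.0 = 1.5
```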

I am no expert on moral uncertainty at all, so my questions are here:

  • Is this in line with philosophical theory on moral uncertainty?

  • How is it done in reality?

Other forms of ambiguity

It is straightforward to use the exact same two possible approaches as above to also deal with all kinds of other ambiguities, such as...

  • Strategic ambiguity arising from non-uniqueness or non-existence of consistent strategic belief profiles

  • Ambiguity arising from the inability to attach subjective probabilities to the strategies of other agents in the first place, e.g. because they might not be rational, might be too different from us, or because one feels that this makes no sense if agents have free will.

But it might also turn out that different forms of ambiguity should be treated in different ways, e.g., sometimes using the first and sometimes using the second approach, or using different values of h for each form of ambiguity.

I think this means there is also a meta-form of ambiguity: methodological ambiguity.

I’ll end here with a final idea: Should we deal with some forms of ambiguity, maybe at least with methodological ambiguity, by making collective choices? I.e.:

  • Should we use some formal collective choice mechanism to decide on a case-by-case basis between the methodological options, or between assumptions or beliefs about relevant features of the world, key parameters, and/​or strategies of other relevant agents?

  • And if so, what collective choice mechanism would be appropriate for this? One that focusses on efficiency like majoritarian mechanisms do, or one that focusses on consensus?
