Solving the moral cluelessness problem with Bayesian joint probability distributions

Hilary Greaves laid out the problem of “moral cluelessness” in her paper Cluelessness: http://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf

Primer on cluelessness

There are some resources on this problem below, taken from the Oxford EA Fellowship materials:

(Edit: one text deprecated and redacted)

Hilary Greaves on Cluelessness, 80,000 Hours podcast (25 min) https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/

If you value future people, why do you consider near-term effects? (20 min) https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term

Simplifying cluelessness (30 min) https://philiptrammell.com/static/simplifying_cluelessness.pdf

Finally, there’s this half-hour talk by Greaves presenting her ideas on cluelessness:

https://www.youtube.com/watch?v=fySZIYi2goY

The complex cluelessness problem

Greaves has the following worry about complex cluelessness:

The cases in question have the following structure:

For some pair of actions of interest A1, A2,

- (CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;

- (CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;

- (CC3) It is unclear how to weigh up these reasons against one another.

She then uses donating bednets in poor countries as an example. By donating bednets, we can save lives at scale. But saving lives could increase the fertility rate, eventually leading to a higher population, and there are good reasons to think that a higher population is net-negative for the long term, or could even constitute an existential threat (CC1, taking A1 to be not donating and A2 to be donating). On the other hand, it’s entirely possible that saving lives in the short term improves humanity’s long-term prospects (CC2): perhaps a higher population now leads to more people enjoying their lives throughout the rest of the universe’s history, or perhaps diminished human tragedy in our own century (because of the lives saved) leads to a more stable, better-educated world that prepares better for existential risk. But as I lay out below, I don’t see why this should lead us to CC3.

A “set point/control theory” solution

This solution applies to the specific example but doesn’t address the general problem.

Many dynamic systems have mechanisms that restore an equilibrium after it is disturbed. In nature, overpopulation of a species in an ecosystem leads to famine, which reduces the population, so the long-run population of the species may not change.

The same may hold for human overpopulation: if it becomes a serious problem, lower population growth now likely means fewer efforts to constrain population in the future, while higher population growth now likely means more such efforts. Thus, by saving lives now (in the short term), we might create a problem that is solved in the medium term, with no long-run consequences.
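To illustrate the set-point intuition, here is a toy negative-feedback simulation. It is not a real demographic model; the equilibrium level, feedback strength, and shock size are all made up. The point is only that a one-time population bump decays away under negative feedback:

```python
def simulate_population(years=300, shock_year=10, shock_size=5.0):
    """Toy set-point model: population drifts back toward a fixed
    equilibrium K, e.g. via policy responses or resource limits."""
    K = 100.0   # hypothetical equilibrium population (arbitrary units)
    r = 0.05    # hypothetical strength of the corrective feedback per year
    pop = K     # start at equilibrium
    history = []
    for t in range(years):
        if t == shock_year:
            pop += shock_size  # one-time bump, e.g. lives saved by bednets
        pop += r * (K - pop)   # negative feedback pulls pop back toward K
        history.append(pop)
    return history

baseline = simulate_population(shock_size=0.0)
shocked = simulate_population(shock_size=5.0)
print(f"gap at year 20:  {shocked[20] - baseline[20]:.3f}")    # still visible
print(f"gap at year 299: {shocked[299] - baseline[299]:.6f}")  # effectively gone
```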

It may be that many such processes tend towards equilibria. The key quantity for a longtermist weighing the long-term danger of an intervention may be its effect on existential risk over the next few hundred years, and medium-term consequences should be evaluated in that context.

A general Bayesian joint probability solution

Hilary Greaves describes this solution in her paper, I believe:

Just as orthodox subjective Bayesianism holds, here as elsewhere, rationality requires that an agent have well-defined credences. Thus, insofar as we are rational, each of us will simply settle, by whatever means, on her own credence function for the relevant possibilities. And once we have done that, subjective c-betterness is simply a matter of expected value with respect to whatever those credences happen to be. In this model, the subjective c-betterness facts may well vary from one agent to another (even in the absence of any differences in the evidence held by the agents in question), but there is nothing else distinctive of ‘cluelessness’ cases; in particular, (2) there is no obstacle to consequences guiding actions, and (3) there is no rational basis for decision discomfort.

To solve the malaria net problem, we can estimate probabilities and quantities like:

  • The probability that short-run fertility changes meaningfully affect long-run fertility

  • The likely increase in fertility due to the malaria net intervention

  • The amount x by which each additional million of population raises existential risk

  • The amount y by which fewer deaths lower existential risk, via improved well-being and community resilience that feed into better long-run global education and decision-making around existential risk

  • ...and so on

Then, we consider two scenarios:

  1. Donate bednets

  2. Do not donate bednets

For each scenario:

  1. Calculate the joint probability distribution over existential risk and other long-term consequences, given these propositions. We don’t need a full model of existential risk; it’s enough to start with an estimate of the relationship between existential risk and relevant variables like population increase, global education, etc.

  2. Compute the expected value of the scenario under that joint distribution.

Then select the action with the higher expected value, as sketched in code below.
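Here is a minimal Monte Carlo sketch of that procedure. Every distribution and constant below (the persistence of the fertility effect, the risk per million of extra population, the value placed on the long-run future) is a hypothetical placeholder, not a real estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # samples from the joint distribution

# --- Hypothetical priors over each uncertain quantity (all made up) ---
# Whether the short-run fertility effect persists into the long run
# (a coin flip whose probability is itself uncertain, Beta(2, 8)).
persists = rng.random(N) < rng.beta(2, 8, N)
# Long-run population increase (millions) attributable to the intervention.
pop_increase = np.where(persists, rng.lognormal(0.0, 0.5, N), 0.0)
# Added existential risk per million of extra population (the CC1-type worry).
xrisk_per_million = rng.beta(2, 1000, N)
# Reduction in existential risk from improved resilience and education
# (the CC2-type hope).
xrisk_reduction = rng.beta(2, 800, N)

# Net change in existential risk under "donate" relative to "don't donate",
# sampled from the joint distribution of all the quantities above.
delta_xrisk = pop_increase * xrisk_per_million - xrisk_reduction

# Expected value of each action: short-run lives saved, minus the expected
# long-run loss from any change in existential risk. Both constants are
# arbitrary placeholders.
LIVES_SAVED = 1_000   # hypothetical short-run benefit of donating
VALUE_AT_STAKE = 1e8  # hypothetical long-run value lost to extinction
ev_donate = LIVES_SAVED - delta_xrisk.mean() * VALUE_AT_STAKE
ev_dont = 0.0         # baseline: no lives saved, no change in risk

print(f"E[value | donate] = {ev_donate:,.0f}")
print(f"E[value | do not donate] = {ev_dont:,.0f}")
print("higher expected value:", "donate" if ev_donate > ev_dont else "do not donate")
```

With different priors the comparison can of course come out the other way, but the procedure itself always returns an answer.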

What am I missing?

Greaves seems to anticipate this response, as above, and goes on to say:

The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.

I am very confused by this turn of reasoning. I don’t think I fully understand what she means by credence functions and imprecise credences, or why imprecision must involve a ‘many-membered set of probability functions’. For our malaria bednets question, we still have one probability function (you might think of it as a distribution over aggregate well-being across the history of the universe, which can for our purposes be reduced to existential risk, i.e. the probability that humanity becomes extinct within the next 500 years). We simply

  • Take the probability distribution of each thing we are uncertain about

  • Find the joint probability distribution over those things under each of our scenarios

  • Compare the expected values under the two joint distributions and pick the action with the higher one

and we’re done! I don’t see how a whole set of probability functions becomes inevitable, or even why we should anticipate it being a problem here.
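To make that concrete, here is the calculation in miniature with a single credence function: two binary unknowns, made-up probabilities, and one joint distribution. Each action then gets exactly one expected value:

```python
from itertools import product

# Hypothetical credences for two binary unknowns (illustrative only).
p_persist = 0.3       # P(fertility effect persists into the long run)
p_raises_xrisk = 0.6  # P(the extra population raises existential risk)

# Hypothetical value of donating under each (persists, raises_xrisk) outcome,
# in arbitrary units, relative to a "do not donate" baseline of 0.
value_if_donate = {
    (True, True):  -100.0,  # effect persists and raises existential risk
    (True, False):   50.0,  # effect persists; resilience benefits dominate
    (False, True):   10.0,  # effect washes out; short-run lives saved remain
    (False, False):  10.0,
}

ev_donate = 0.0
for persists, raises_xrisk in product([True, False], repeat=2):
    # Joint probability of this outcome (assuming independence for simplicity).
    p = ((p_persist if persists else 1 - p_persist)
         * (p_raises_xrisk if raises_xrisk else 1 - p_raises_xrisk))
    ev_donate += p * value_if_donate[(persists, raises_xrisk)]

print(f"E[value | donate] = {ev_donate:.1f} vs E[value | do not donate] = 0.0")
```

(As far as I can tell, the representor move is about whether numbers like p_persist can be pinned down to a single value at all; but given any single choice, the calculation above goes through.)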

Can anyone shed light on this?