Solving the moral cluelessness problem with Bayesian joint probability distributions
Hilary Greaves laid out the problem of “moral cluelessness” in her paper Cluelessness: http://users.ox.ac.uk/~mert2255/papers/cluelessness.pdf
Primer on cluelessness
There are some resources on this problem below, taken from the Oxford EA Fellowship materials:
(Edit: one text deprecated and redacted)
Hilary Greaves on Cluelessness, 80000 Hours podcast (25 min) https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/
If you value future people, why do you consider short-term effects? (20 min) https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term
Simplifying cluelessness (30 min) https://philiptrammell.com/static/simplifying_cluelessness.pdf
Finally, there’s this half-hour talk by Greaves presenting her ideas on cluelessness:
https://www.youtube.com/watch?v=fySZIYi2goY
The complex cluelessness problem
Greaves has the following worry about complex cluelessness:
The cases in question have the following structure:
For some pair of actions of interest A1, A2,
- (CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;
- (CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;
- (CC3) It is unclear how to weigh up these reasons against one another.
She then uses donating bednets to poor countries as an example of this. By donating bednets, we can save lives at scale. Saving lives could increase the fertility rate, eventually leading to a higher population. There are good reasons to think that a higher population is net negative for the long term, or could even constitute an existential threat (CC1). On the other hand, it’s entirely possible that saving lives in the short term could improve humanity’s long-term prospects (CC2): perhaps a higher population now will lead to a larger number of people enjoying their lives throughout the rest of the universe’s history, or perhaps reduced human tragedy in our own century (because of lives saved) could lead to a more stable and better-educated world that prepares better for existential risk. But as I lay out below, I don’t see why this should lead us to CC3.
A “set point/Control Theory” solution
This solution applies to the specific example but doesn’t address the general problem.
Many dynamic systems have a way of restoring equilibria that are out of balance. In nature, overpopulation of a species in an ecosystem leads to famine, which leads to a decrease in population, and so overall, the long-run species population may not change.
For human overpopulation, if overpopulation becomes a serious problem, lower population growth now is likely to lead to fewer efforts to constrain population in the future. Conversely, higher population growth now is likely to lead to more efforts to constrain population in the future. Thus, by saving lives now (the short term), we might create a problem that is solved in the medium term, with no long-run consequences.
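As a toy illustration of this set-point intuition (the dynamics and parameters below are invented, not a model of real demography), here’s a minimal simulation of logistic growth toward a carrying capacity: a one-time perturbation to the population changes the medium term but washes out of the long run.

```python
def logistic_trajectory(p0: float, r: float = 0.1, k: float = 1.0, steps: int = 200) -> float:
    """Discrete logistic growth toward carrying capacity k; returns the final population."""
    p = p0
    for _ in range(steps):
        p += r * p * (1 - p / k)
    return p

# A one-time "lives saved" perturbation shifts the starting population,
# but both trajectories converge to the same carrying capacity:
print(logistic_trajectory(0.50))  # ≈ 1.0
print(logistic_trajectory(0.55))  # ≈ 1.0 (the perturbation washes out)
```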
It may be that many processes tend towards equilibria. The key problem for a longtermist in valuing the long-term danger of an intervention may be its effect on existential risk in the next few hundred years, and medium-term consequences should be evaluated in that context.
A general Bayesian joint probability solution
Hilary Greaves gives this solution in her paper, I believe:
Just as orthodox subjective Bayesianism holds, here as elsewhere, rationality requires that an agent have well-defined credences. Thus, insofar as we are rational, each of us will simply settle, by whatever means, on her own credence function for the relevant possibilities. And once we have done that, subjective c-betterness is simply a matter of expected value with respect to whatever those credences happen to be. In this model, the subjective c-betterness facts may well vary from one agent to another (even in the absence of any differences in the evidence held by the agents in question), but there is nothing else distinctive of ‘cluelessness’ cases; in particular, (2) there is no obstacle to consequences guiding actions, and (3) there is no rational basis for decision discomfort.
To solve the malaria net problem, we can estimate probabilities and quantities like:
Short-run fertility meaningfully impacts long-run fertility
Likely increase in fertility due to the malaria net intervention
Each million of population increase will increase existential risk by x.
Fewer deaths will yield some level of improved well-being and community resilience; the additional resilience and well-being improve long-run global education and decision-making around existential risk, lowering existential risk by y
...and so on
Then, we consider two scenarios:
Donate bednets
Do not donate bednets
For each scenario:
Calculate the joint probability of existential risk and other long-term consequences under each of these scenarios, given these propositions. We don’t need a full model of existential risk; it’s enough to start with an estimate of the relationship between existential risk and relevant variables like population increase, global education, etc.
Weight the estimated value of each action by the joint probability.
Select the action with the highest estimated value based on the joint probability.
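Here is a minimal Monte Carlo sketch of this procedure. Every distribution and constant below (the population increase, the existential-risk sensitivities, the baseline risk) is an invented placeholder rather than an actual estimate:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Placeholder priors over the uncertain propositions (all numbers invented):
pop_increase = rng.lognormal(mean=0.0, sigma=0.5, size=N)  # millions of extra people
xrisk_per_million = rng.normal(1e-6, 5e-7, size=N)         # x-risk change per million people
resilience_benefit = rng.normal(1.5e-6, 1e-6, size=N)      # x-risk reduction from resilience

baseline_xrisk = 0.1  # placeholder probability of existential catastrophe

# Joint long-run consequences under each scenario:
xrisk_donate = baseline_xrisk + pop_increase * xrisk_per_million - resilience_benefit
xrisk_no_donate = np.full(N, baseline_xrisk)

# Expected value of each action, with the value of a surviving long-term future normalised to 1:
ev_donate = (1 - xrisk_donate).mean()
ev_no_donate = (1 - xrisk_no_donate).mean()

print("EV(donate)    =", ev_donate)
print("EV(no donate) =", ev_no_donate)  # choose the action with the higher EV
```

Nothing here resolves the empirical uncertainty, of course; the point is only that once each proposition has a probability distribution, the comparison reduces to ordinary expected value.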
What am I missing?
Greaves seems to anticipate this response, as above, and goes on to say:
The alternative line I will explore here begins from the suggestion that in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’). Intuitively, the idea here is that when the evidence fails conclusively to recommend any particular credence function above certain others, agents are rationally required to remain neutral between the credence functions in question: to include all such equally-recommended credence functions in their representor.
I am very confused by this turn of reasoning. I don’t think I fully understand what she means by “credence function” and “imprecise credences”, or why the problem is necessarily related to a ‘many-membered set of probability functions’. For our malaria bednets question, we still have one probability function (you might think of it as a distribution over aggregate well-being across the history of the universe, which for our purposes can be reduced to existential risk: the probability that humanity becomes extinct within the next 500 years). We simply:
Take the probability distributions of each thing we are uncertain about
Find the joint probability distribution for each of those things under each of our scenarios
Compare the joint probability distributions to find the action with the highest expected value
and we’re done! I don’t see how the problem of a whole set of probability functions is inevitable, or even why we should anticipate it being a problem here.
Can anyone shed light on this?
Hey!
I think Hilary Greaves does a great job of explaining cluelessness in non-jargon terms in her most recent appearance on the 80K podcast.
As far as I understand it, cluelessness arises because, since we don’t have sufficient evidence, we’re very unsure about what our credences should be, to the point that they feel, or maybe just are, arbitrary. In this case, you could still just carry out the expected value calculation and opt for the most choiceworthy action, as you suggest. However, it seems unsatisfying because the credence function you use is arbitrary. Indeed, given your level of evidence, you could very well have opted for another set of beliefs that would have led you to act differently.
Thus, one might argue that in order to be rational in this type of predicament, you have to consider several probability functions that are consistent with the evidence you have. In other words, you are required to have “imprecise credences” because you cannot determine in a principled manner which probability function you should use.
As Hilary Greaves herself points out in the podcast I mentioned above, if you’re not troubled by this, and you’re by yourself, you can just compute the expected value, but issues can arise when you try to coordinate with other agents that have different arbitrary beliefs. This is why it might be important to take cluelessness seriously.
I hope this helps!
Her choice to use multiple, independent probability functions itself seems arbitrary to me, although I’ve done more reading since posting the above and have started to understand why there is a predicament.
Instead of multiple independent probability functions, you could start with a probability distribution for each of the items you are uncertain about, and then calculate the joint probability distribution by combining all of those distributions. That’ll give you a single probability density function on which you can base your decision.
If you start with a set of several probability functions, each representing a set of beliefs, then combining them would require sampling randomly from each function according to some distribution specifying how likely each of the functions is. It can be done, with the proviso that you must have a probability distribution specifying the relative likelihood of each of the functions in your set.
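Here’s a minimal sketch of that idea, with three hypothetical candidate priors and invented weights; sampling from the weighted mixture collapses the set of probability functions into a single distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three candidate priors (probability functions) for some unknown quantity,
# plus a meta-distribution saying how likely each one is (weights invented):
priors = [
    lambda n: rng.beta(1, 1, n),  # uninformative
    lambda n: rng.beta(2, 5, n),  # skewed low
    lambda n: rng.beta(5, 2, n),  # skewed high
]
weights = [0.5, 0.25, 0.25]

# Sampling from the mixture yields one combined distribution:
n = 100_000
which = rng.choice(len(priors), size=n, p=weights)
samples = np.concatenate([priors[i](np.sum(which == i)) for i in range(len(priors))])

print(samples.mean())  # mean of the single combined distribution, here ≈ 0.5
```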
However, I do worry that the same problem arises in this approach in a different form. If you really do have no information about the probability of some event, then in Bayesian terms, your prior probability distribution is completely uninformative. You might need to use an improper prior, which can be difficult to update on in some circumstances. I think this is a Bayesian, mathematical representation of what Greaves calls an “imprecise credence”.
But I think the good news is that many times, your priors are not so imprecise that you can’t assign some probability distribution, even if it is incredibly vague. So there may end up not being too many problems where we can’t calculate expected long-term consequences for actions.
I do remain worried, with Greaves, that GiveWell’s approach of assessing direct impact for each of its potential causes is woefully insufficient. Instead, we need to calculate the very long-term impact of each cause, and because of the value of the long-term future, anything that affects the probability of existential risk, even by an infinitesimal amount, will dominate the expected value of our intervention.
And I worry that this sort of approach could end up being extremely counterintuitive. It might lead us to the conclusion that promoting fertility by any means necessary is positive, or equally likely, to the conclusion that controlling and reducing fertility by any means necessary is positive. These things could lead us to want to implement extremely coercive measures, like banning abortion or mandating abortion depending on what we want the population size to be. Individual autonomy seems to fade away because it just doesn’t have comparable value. Individual autonomy could only be saved if we think it would lead to a safer and more stable society in the long run, and that’s extremely unclear.
And I reach the same conclusion that I think Greaves has: that one of the most valuable things you can do right now is to estimate some of the various contingencies, in order to reduce the uncertainty and imprecision in various probability estimates. That will raise the expected value of your choice because it is much less likely to be the wrong one.
I’m not sure what makes you think that. Prof. Greaves does state that rational agents may be required “to include all such equally-recommended credence functions in their representor”. This feels a lot less arbitrary than deciding to pick a single prior among all those available and computing the expected value of your actions based on it.
I agree that you could do that, but it seems even more arbitrary! If you think that choosing a set of probability functions was arbitrary, then having a meta-probability distribution over your probability distributions seems even more arbitrary, unless I’m missing something. It doesn’t seem to me like the kind of situation where going meta helps: intuitively, if someone is very unsure about what prior to use in the first place, they should probably also be unsure about coming up with a second-order probability distribution over their set of priors.
I do not think that’s what Prof. Greaves means when she says “imprecise credence”. This article from the Stanford Encyclopedia of Philosophy explains the meaning of that phrase for philosophers. It also explains what a representor is better than I did.
I think Prof. Greaves and Philip Trammell would disagree with that, which is why they’re talking about cluelessness. For instance, Phil writes:
Hope this helps.
> Hope this helps.
It does, thanks—at least, we’re clarifying where the disagreements are.
All you need to come up with that meta-probability distribution is some information about the relative plausibility of each item in your set of probability functions. If our conclusion for a particular dilemma turns on a disagreement between virtue ethics, utilitarian ethics, and deontological ethics, this is a difficult problem that people will disagree strongly on. But can you at least agree that each of these is, say, between 1% and 99% likely to be the correct moral theory? If so, you have a slightly informative prior, and there is a possibility you can make progress. If we really have completely no idea, then I agree, the situation really is entirely clueless. But I think with extended consideration, many reasonable people might be able to come to an agreement.
I agree with this. If the question is, “can anyone, at any moment in time, give a sensible probability distribution for any question”, then I agree the answer is “no”.
But with some time, I think you can assign a sensible probability distribution to many difficult-to-estimate things that are not completely arbitrary nor completely uninformative. So, specifically, while I can’t tell you right now about the expected long-run value for giving to Malaria Consortium, I think I might be able to spend a year or so understanding the relationship between giving to Malaria Consortium and long-run aggregate sentient happiness, and that might help me to come up with a reasonable estimate of the distribution of values.
We’d still be left with a case where, very counterintuitively, the actual act of saving lives is mostly only incidental to the real value of giving to Malaria Consortium, but it seems to me we can probably find a value estimate.
About this, Greaves (2016) says,
And I wholeheartedly agree, but it doesn’t follow from the fact that you can’t immediately form an opinion that you can’t, with much research, make an informed estimate that is better than entirely indeterminate or undefined.
EDIT: I haven’t heard Greaves’ most recent podcast on the topic, so I’ll check that out and see if I can make any progress there.
EDIT 2: I read the transcript of the podcast that you suggested, and I don’t think it really changes my confidence that estimating a Bayesian joint probability distribution could get you past cluelessness.
My reaction to that (beyond that I should read Askell’s piece) is that I disagree with Greaves’ suggestion that even a lifetime of research couldn’t resolve the question for something like giving to Malaria Consortium. I think it’s quite possible one could make enough progress to arrive at an informative probability distribution. And perhaps it only says “across the probability distribution, there’s a 52% likelihood that giving to x charity is good and a 48% probability that it’s bad”, but if the expected value is high enough, that’s still a strong impetus to give to x charity.
We’re still left with a framework in which our choices among short-term interventions are probably going to be dominated by their long-run effects, which is extremely counterintuitive, but at least I have some indication.
I remain partial to the path forward I proposed here: Doing good while clueless
Thanks! That was helpful, and my initial gut reaction is I entirely agree :-)
Have you had an opportunity to see how Hilary Greaves might react to this line of thinking? If I had to hazard a guess, I imagine she’d be fairly sympathetic to the view you expressed.
Interesting! Thank you for writing this, this is something I was also wondering about while reading for the Warwick EA fellowship. My intuition is also that in the case of a “many-membered set of probability functions”, I’d define a prior over those and then compute an expected value as if nothing happened. I acknowledge that there is substantial (or even overwhelming) uncertainty sometimes and I can understand the impulse of wanting a separate conceptual handle for that. But it’s still “decision making under uncertainty” and should therefore be subsumable under Bayesianism.
I feel similar to ben.smith that I might be completely missing something. But I also wonder if this confusion might just be an echo of the age-old Bayesianism vs Frequentism debate, where people have different intuition about whether priors over probability distributions are a-ok.
There is an argument from intuition by Schoenfield (2012), which carries some force, that we can’t use a probability function:
Intuitively, this sounds right. And if you approached this problem trying to solve the crime on intuition alone, you might really have no idea. Reading the passage, it sounds mind-boggling.
On the other hand, if you applied some reasoning and study, you might be able to come up with some probability estimates. You could estimate P(Smith did it | an eyewitness says Smith did it), including a probability distribution over that probability itself, if you like. You can work out how to combine evidence from multiple witnesses, i.e., P(Smith did it | eyewitness 1 says Smith did it) and P(Smith did it | eyewitness 2 says Smith did it), and so on up to 68 and 69 witnesses. You can estimate the independence of the eyewitnesses, and from that work out how to properly combine their evidence.
And it might turn out that you don’t update as a result of the extra eyewitness, under some circumstances. Perhaps you know the eyewitnesses aren’t independent; they’re all card-carrying members of the “We hate Smith” club. In that case, it simply turns out that the extra eyewitness is irrelevant to the problem; it doesn’t qualify as evidence, so it doesn’t mean you’re insensitive to “mild evidential sweetening”.
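To make this concrete, here is a toy version of that calculation (the likelihood ratio and correlation figure are invented): highly correlated eyewitnesses make the 69th witness carry almost no extra evidence, while leaving you sensitive to genuine evidential sweetening.

```python
def posterior_guilt(prior: float, n_witnesses: int, lr: float, rho: float = 0.0) -> float:
    """Posterior P(Smith did it) after n corroborating eyewitnesses.

    Each independent testimony multiplies the odds by the likelihood ratio lr;
    rho crudely discounts the effective number of independent testimonies
    (rho = 1 means the extra witnesses add nothing beyond the first).
    """
    effective_n = 1 + (n_witnesses - 1) * (1 - rho)
    odds = (prior / (1 - prior)) * lr ** effective_n
    return odds / (1 + odds)

# Weak witnesses (lr = 1.05) who mostly echo each other (rho = 0.95):
print(posterior_guilt(0.5, 68, 1.05, rho=0.95))  # ≈ 0.553
print(posterior_guilt(0.5, 69, 1.05, rho=0.95))  # ≈ 0.553, a real but tiny update
```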
I think a lot of the problem here is that these authors are discussing what one could do when one sits down for the first time and tries to grapple with a problem. In those cases there’s so many undefined features of the problem that it really does seem impossible and you really are clueless.
But that’s not the same as saying that, with sufficient time, you can’t put probability distributions to everything that’s relevant and try to work out the joint probability.
----
Schoenfield, M. Chilling out on epistemic rationality. Philos Stud 158, 197–219 (2012).
Hi Ben.
One should update towards a higher chance of Smith having committed the crime. However, if one was around 50 % confident that Smith committed the crime before the update, an update much smaller than 50 pp will still leave one around 50 % confident. Nevertheless, the best guess for the probability that Smith committed the crime should still go up as a result of the update. If the contribution of an additional eyewitness feels completely irrelevant, one could estimate it as (the update for N additional eyewitnesses)/N. This will not feel completely irrelevant for a sufficiently large N, unless one considers all eyewitnesses testifying that Smith committed the crime to be no evidence at all.
I agree.
Some questions here are whether starting with a precise 50-50 is reasonable, and whether the procedure that assigned that precise 50-50 is reasonable.
If, when looking at the scenario, your reasoning was something like “wow, that’s so complicated and I’m clueless, so 50-50”, then your reaction almost certainly would have been the same if the example had originally included one extra eyewitness in favour of one side. But this tells you your initial way of assigning credences was insensitive to this small difference. And yet after the initial assignment, you say it should be sensitive.
Or, if you forgot your initial judgement or the number of eyewitnesses and was just given the total and looked at the situation with fresh eyes, you’d come up with 50-50 again.
Alternatively, you could build a precise probability distribution as a function of the evidence that weighs it all, but this would be very sensitive to arbitrary choices.
I could report 50 % for both 68 and 69 eyewitnesses, but this does not necessarily imply I am insensitive to small changes in the number of eyewitnesses. In practice, I would be reporting my best guess rounded to the closest multiple of 0.1 or so. So the reported value being exactly the same would only mean my best guesses differ by less than 10 pp, not that they are exactly the same. I would say the mean of the (rounded) reported best guesses for a given number of eyewitnesses tends to the (precise) underlying best guess as the number of reports increases. If I could hypothetically encounter the question in practically the same situation 1 M times, I could easily see the mean of my reported values for 68 and 69 eyewitnesses being different.
If I asked you to actually decide who’s more likely to be the culprit, how would you do it?
What do you do if you don’t have reference class information for each part of the problem? How do you weigh the conflicting evidence? I’m imagining that at many steps, you’d have to rely on direct impressions or numbers that just came to mind.
Would you feel like whatever came out was very arbitrary and depended too much on direct impressions or numbers that just came to mind? Would you actually believe and endorse what came out? Would you defend it to other people?
What I would actually do depends a lot on the situation, but I have a hard time imagining scenarios where it matters whether the probability of Jones having committed the crime is 40 % or 60 %. So I might not even try to decrease the uncertainty about this, and just focus on other considerations. What would maximise the impact of my future donations and work? What information would I have about Jones and Smith? Who would have the greater potential to contribute to a better world? How much time would I have to decide? Would I be accountable in some way for my decision? If so, how would my decision be assessed? What would be the potential consequences of people concluding I made a good or bad decision? How were decisions like mine assessed in the past?
Do you (Michael) see your views about precise and imprecise credences significantly affecting what you would actually do in the real world in a scenario where you had to blame Jones or Smith? Would considerations like the ones I mentioned above matter more? I may be dodging your question, but I am ultimately interested in making better decisions in the real world. So I think it makes sense to discuss precise and imprecise credences in the context of realistic scenarios.
Probably not. I see it as more illustrative of important cases. Imagine instead it’s between supporting an intervention or not, and it has similar complexity and considerations going in each direction.
More relevant examples to us could be: crops vs nature for wild animals, climate change on wild animals, fishing on wild animals, the far future effects of our actions, the acausal influence of our actions. These are all things I feel clueless enough about to mostly bracket away and ignore when they are side effects of direct interventions I’m interested in supporting. I’m not ignoring them because I think they’re small. I think they are likely much larger than the effects I’m not ignoring.
I may also want to further study some of them, but I’m often not that optimistic about making much progress (especially for far future effects and acausal influence), or about that progress being used in a way that isn’t net negative overall by my lights.
How much more optimistic would you be about research on i) the welfare of soil animals and microorganisms, and ii) comparisons of (expected hedonistic) welfare across species if you strongly endorsed expectational total hedonistic utilitarianism, moral realism, and precise probabilities, and ignored acausal effects, and effects after 100 years?
While browsing types of uncertainties, I stumbled upon the idea of state space uncertainty and conscious unawareness, which sounds similar to your explanation of cluelessness and which might be another helpful angle for people with a more Bayesian perspective.
https://link.springer.com/article/10.1007/s10670-013-9518-4
A good point.
There are things you can do to correct for this sort of thing. For instance, go one level more meta: estimate the probability of unforeseen consequences in general, or within the class of problems that your specific problem fits into.
We couldn’t have predicted the Fukushima disaster, but perhaps we can predict related things with some degree of certainty: the average cost and death toll of earthquakes worldwide, for instance. In fact, this is a fairly well-explored space, since insurers have to understand the risk of earthquakes.
The ongoing pandemic is a harder example: the rarer the black swan, the more difficult it is to predict. But even then, prior to the 2020 pandemic, the WHO had estimated the amortized cost of pandemics as on the order of 1% of global GDP annually (averaged over years with and without pandemics), which seems like a reasonable approximation.
I don’t know how much of a realistic solution that would be in practice.
This is a great example, thanks for sharing!
I think the example Ben cites in his reply is very illustrative.
You might feel that you can’t justify your one specific choice of prior over another prior, so that particular choice is arbitrary, and then what you should do could depend on this arbitrary choice, whereas an equally reasonable prior would recommend a different decision. Someone else could have exactly the same information as you, but due to a different psychology, or just different patterns of neurons firing, come up with a different prior that ends up recommending a different decision. Choosing one prior over another without reason seems like a whim or a bias, and potentially especially prone to systematic error.
It seems bad if we’re basing how to do the most good on whims and biases.
If you’re lucky enough to have only finitely many equally reasonable priors, then I think it does make sense to just use a uniform meta-prior over them, i.e. just take their average. This doesn’t seem to work with infinitely many priors, since you could use different parametrizations to represent the same continuous family of distributions, with a different uniform distribution and therefore average for each parametrization. You’d have to justify your choice of parametrization!
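A quick illustration of that parametrization problem (the Beta family is arbitrary, chosen only for the demo): a ‘uniform’ average over the same one-parameter family of priors gives different answers under different parametrizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# One continuous family of priors: Beta(1, b) for b in [1, 10].
# The mean of Beta(1, b) is 1 / (1 + b).
b_uniform = rng.uniform(1, 10, 1_000_000)                      # uniform in b
b_log_uniform = np.exp(rng.uniform(0, np.log(10), 1_000_000))  # uniform in log(b)

# The "average prior" mean depends on which parametrization you call uniform:
print((1 / (1 + b_uniform)).mean())      # ≈ 0.19
print((1 / (1 + b_log_uniform)).mean())  # ≈ 0.26
```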
As another example, imagine you have a coin that someone (who is trustworthy) has told you is biased towards heads, but they haven’t given you any hint how much, and you want to come up with a probability distribution for the fraction of heads over 1,000,000 flips. So, you want a distribution over the interval [0, 1]. Which distribution would you use? Say you give me a probability density function f. Why not (1−p)f(x) + p for some p ∈ (0, 1)? Why not f(x^p) / ∫₀¹ f(t^p) dt for some p > 0? If f is a weighted average of multiple distributions, why not apply one of these transformations to one of the component distributions and choose the resulting weighted average instead? Why the particular weights you’ve chosen and not slightly different ones?
I think you just have to make your distribution uninformative enough that reasonable differences in the weights don’t change your overall conclusion. If they do, then I would concede that the solution to your specific question really is clueless. Otherwise, you can probably find a response.
Rather than thinking directly of an appropriate distribution for the 1,000,000 flips, I’d think of a distribution to model p itself. Then you can run simulations based on the distribution of p to calculate the distribution of the fraction of heads over the 1,000,000 flips. Since the coin is biased towards heads, p ∈ (0.5, 1.0], and we need to select a distribution for p over that range.
There is no one correct probability distribution for p because any probability is just an expression of our belief, so you may use whatever probability distribution genuinely reflects your prior belief. A uniform distribution is a reasonable start. Perhaps you really are clueless about p, in which case, yes, there’s a certain amount of subjectivity about your choice. But prior beliefs are always inherently subjective, because they simply describe your belief about the state of the world as you know it now. The fact you might have to select a distribution, or set of distributions with some weighted average, is merely an expression of your uncertainty. This in itself, I think, doesn’t stop you from trying to estimate the result.
I think this expresses within Bayesian terms the philosophical idea that we can only make moral choices based on information available at the time; one can’t be held morally responsible for mistakes made on the basis of the information we didn’t have.
Perhaps you disagree with me that a uniform distribution is the best choice. You reason thus: “we have some idea about the properties of coins in general. It’s difficult to make a coin that is 100% biased towards heads. So that seems unlikely”. So we could pick a distribution that better reflects your prior belief. Perhaps a suitable choice might be Beta(2,2) truncated at 0.5, which gives the greatest likelihood to values of p just above 0.5, declining down to 1.0.
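For what it’s worth, that prior is straightforward to operationalise. A minimal sketch, assuming the truncated Beta(2,2) suggested above:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_beta(a: float, b: float, low: float, size: int) -> np.ndarray:
    """Rejection-sample Beta(a, b) truncated to (low, 1]."""
    samples = np.empty(0)
    while samples.size < size:
        draws = rng.beta(a, b, size)
        samples = np.concatenate([samples, draws[draws > low]])
    return samples[:size]

n_flips = 1_000_000
p = truncated_beta(2, 2, 0.5, 10_000)                # prior over the coin's bias
heads_fraction = rng.binomial(n_flips, p) / n_flips  # predictive distribution

print(heads_fraction.mean())                   # ≈ 0.69 under this prior
print(np.percentile(heads_fraction, [5, 95]))  # 90% predictive interval
```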
Maybe, after all that, you and I just can’t agree on a consistent and reasonable prior choice, nor even on any compromise. And let’s say we both run simulations using our own priors, find entirely different results, and can’t agree on any suitable weighting between them. In that case, yes, I can see you have cluelessness. But I don’t think it follows that, if we went through the same process for estimating the longtermist moral worth of malaria bednet distribution, we must have intractable complex cluelessness about that specific problem. I can admit that perhaps, right now, in our current belief state, we are genuinely clueless, but it seems there is work that can be done that might eliminate the cluelessness.
Hi Michael.
I agree. However, in cases where priors are playing a crucial role, one should simply prioritise gathering more evidence until there is reasonable convergence about what to do (among a given group of people, for a particular decision)?
In some cases, we can’t gather strong enough evidence, say because:
they’re questions about very speculative or unprecedented possibilities, and the evidence would either be too indirect and weak or come too late to be very action-guiding, e.g. often for AI risk or conscious subsystems, or
there will be too much noise or confounding, too small a sample size and anything like an RCT is too impractical (e.g. policy, corporate outreach) or wouldn’t generalize well, or
the disagreements are partly conceptual, definitional or philosophical, e.g. “What is consciousness?”, “What is the hedonic intensity of an experience?”
EDIT: generally, the window to intervene is too small to wait for the evidence.
In such cases, I think imprecise probabilities are the way to go to reduce arbitrariness. We can do sensitivity analysis. If whether the intervention looks good or bad overall depends highly on fairly arbitrary judgements or priors, we might disprefer it and prefer to support things that are more robustly positive. This is difference-making ambiguity aversion.
And/or we can do some kind of bracketing.
Also, you should think of research as an intervention itself that could backfire. Who could use the research, and could they use it in ways you’d judge as very negative? How likely is that? This will of course depend on the case and your own specific views.
The reasons you mentioned for gathering strong evidence not being possible (or being very difficult) apply to some extent to efforts to increase human welfare, but humans have probably still made progress on increasing human welfare over the past 200 years or so? Can one be confident similar progress cannot be extended to non-humans?
I agree research can backfire. However, at least historically, doing research on the sentience of animals, and on how to increase their welfare has mostly been beneficial for the target animals?