Hey!

I think Hilary Greaves does a great job of explaining cluelessness in non-jargon terms in her most recent appearance on the 80K podcast.

As far as I understand it, cluelessness arises because, since we don’t have sufficient evidence, we’re very unsure about what our credences should be, to the point that they feel (or maybe just are) arbitrary. In that case, you could still just carry out the expected value calculation and opt for the most choiceworthy action, as you suggest. However, this seems unsatisfying because the credence function you use is arbitrary. Indeed, given your level of evidence, you could very well have opted for another set of beliefs that would have led you to act differently.
Thus, one might argue that in order to be rational in this type of predicament, you have to consider several probability functions that are consistent with the evidence you have. In other words, you are required to have “imprecise credences” because you cannot determine in a principled manner which probability function you should use.
As Hilary Greaves herself points out in the podcast I mentioned above, if you’re not troubled by this and you’re acting alone, you can just compute the expected value; but issues can arise when you try to coordinate with other agents who have different arbitrary beliefs. This is why it might be important to take cluelessness seriously.

I hope this helps!
Her choice to use multiple, independent probability functions itself seems arbitrary to me, although I’ve done more reading since posting the above and have started to understand why there is a predicament.
Instead of multiple independent probability functions, you could start with a set of probability distributions for each of the items you are uncertain about, and then calculate the joint probability distribution by combining all of those distributions. That’ll give you a single probability density function on which you can base your decision.
If you start with a set of several probability functions, each representing a set of beliefs, then calculating their joint probability would require sampling randomly from each function according to some distribution specifying how likely each of the functions is. It can be done, with the proviso that you must have a probability distribution specifying the relative likelihood of each function in your set.
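To make that concrete, here’s a rough sketch of the kind of calculation I have in mind, in Python. Everything here is invented for illustration: the three credence functions, the quantity they’re distributions over, and the meta-weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three rival credence functions over the long-run value of some
# intervention, each represented as a sampler. Parameters invented.
credence_functions = [
    lambda: rng.normal(loc=5.0, scale=2.0),    # optimistic worldview
    lambda: rng.normal(loc=0.0, scale=1.0),    # agnostic worldview
    lambda: rng.normal(loc=-3.0, scale=4.0),   # pessimistic worldview
]

# The extra ingredient: a meta-distribution saying how likely each
# credence function is to be the right one.
meta_weights = [0.5, 0.3, 0.2]

# Sample the combined distribution: pick a credence function
# according to the meta-weights, then sample a value from it.
samples = np.array([
    credence_functions[rng.choice(3, p=meta_weights)]()
    for _ in range(100_000)
])

print(f"expected value: {samples.mean():.2f}")
print(f"P(value > 0):   {(samples > 0).mean():.2f}")
```

The crux, as I said, is the `meta_weights` vector: the calculation only goes through if you’re willing to supply one.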
However, I do worry that the same problem arises in this approach in a different form. If you really do have no information about the probability of some event, then in Bayesian terms your prior probability distribution is completely uninformative. You might need to use an improper prior, and improper priors can be difficult to update on in some circumstances. I think these are a Bayesian, mathematical representation of what Greaves calls an “imprecise credence”.
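As a toy illustration of that difficulty (my example, not Greaves’s): with a conjugate Beta-Binomial model, a vague but proper prior updates cleanly, while the improper Haldane prior Beta(0, 0) only yields a proper posterior once you’ve observed at least one success and one failure.

```python
from scipy.stats import beta

# Toy data: 3 successes, 7 failures for some binary event.
successes, failures = 3, 7

# Vague but proper prior: Beta(1, 1), i.e. uniform on [0, 1].
# The conjugate update gives the posterior Beta(1 + s, 1 + f).
posterior = beta(1 + successes, 1 + failures)
print(f"posterior mean: {posterior.mean():.3f}")  # ~0.333

# By contrast, the improper Haldane prior Beta(0, 0) isn't a real
# distribution at all, and its "posterior" Beta(s, f) is only
# proper once s > 0 and f > 0. Before seeing both kinds of outcome,
# there is nothing coherent to update.
```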
But I think the good news is that many times, your priors are not so imprecise that you can’t assign some probability distribution, even if it is incredibly vague. So there may end up not being too many problems where we can’t calculate expected long-term consequences for actions.
I do remain worried, with Greaves, that GiveWell’s approach of assessing the direct impact of each of its potential causes is woefully insufficient. Instead, we need to calculate the very long-term impact of each cause, and because of the value of the long-term future, anything that affects the probability of existential risk, even by an infinitesimal amount, will dominate the expected value of our intervention.
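To see why the long-run term swamps everything else, here is the arithmetic with magnitudes I’ve invented purely for illustration:

```python
# Invented magnitudes, purely to illustrate the arithmetic.
future_lives = 1e16          # one ballpark for the scale of the long-term future
xrisk_reduction = 1e-10      # an "infinitesimal" change in extinction probability
direct_lives_saved = 1_000   # direct impact of a typical intervention

print(f"long-run expected lives: {xrisk_reduction * future_lives:,.0f}")  # 1,000,000
print(f"direct lives saved:      {direct_lives_saved:,}")                 # 1,000
```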
And I worry that this sort of approach could end up being extremely counterintuitive. It might lead us to the conclusion that promoting fertility by any means necessary is positive, or, just as likely, to the conclusion that controlling and reducing fertility by any means necessary is positive. These conclusions could lead us to want to implement extremely coercive measures, like banning abortion or mandating abortion, depending on what we want the population size to be. Individual autonomy seems to fade away because it just doesn’t have comparable value. Individual autonomy could only be saved if we think it would lead to a safer and more stable society in the long run, and that’s extremely unclear.
And I think I reach the same conclusion Greaves has: that one of the most valuable things you can do right now is to estimate some of the various contingencies, in order to lower the uncertainty and imprecision of various probability estimates. That’ll raise the expected value of your choice because it is much less likely to be the wrong one.
> Her choice to use multiple, independent probability functions itself seems arbitrary to me,...
I’m not sure what makes you think that. Prof. Greaves does state that rational agents may be required “to include all such equally-recommended credence functions in their representor”. This feels a lot less arbitrary than picking a single prior among all those available and computing the expected value of your actions based on it.
> Instead of multiple independent probability functions, you could start with a set of probability distributions for each of the items you are uncertain about, and then calculate the joint probability distribution by combining all of those distributions. That’ll give you a single probability density function on which you can base your decision.
I agree that you could do that, but it seems even more arbitrary! If you think that choosing a set of probability functions was arbitrary, then having a meta-probability distribution over your probability distributions seems even more arbitrary, unless I’m missing something. It doesn’t seem to me like the kind of situation where going meta helps: intuitively, if someone is very unsure about what prior to use in the first place, they should probably also be unsure about coming up with a second-order probability distribution over their set of priors.
> You might need to use an improper prior, and improper priors can be difficult to update on in some circumstances. I think these are a Bayesian, mathematical representation of what Greaves calls an “imprecise credence”.
I do not think that’s what Prof. Greaves means when she says “imprecise credence”. This article from the Stanford Encyclopedia of Philosophy explains the meaning of that phrase for philosophers. It also explains what a representor is better than I did.
> But I think the good news is that many times, your priors are not so imprecise that you can’t assign some probability distribution, even if it is incredibly vague. So there may end up not being too many problems where we can’t calculate expected long-term consequences for actions.
I think Prof. Greaves and Philip Trammell would disagree with that, which is why they’re talking about cluelessness. For instance, Phil writes:
> Perhaps there is some sense in which my credences should be sharp (see e.g. Elga (2010)), but the inescapable fact is that they are not. There are obviously some objects that do not have expected values for the act of giving to Malaria Consortium. The mug on my desk right now is one of them. Upon immediately encountering the above problem, my brain is like the mug: just another object that does not have an expected value for the act of giving to Malaria Consortium. Nor is there any reason to think that an expected value must “really be there”, deep down, lurking in my subconscious. Lots of theorists, going back at least to Knight’s (1921) famous distinction between “risk” and “uncertainty”, have recognized this.

Hope this helps.
> Hope this helps.

It does, thanks; at least we’re clarifying where the disagreements are.
> If you think that choosing a set of probability functions was arbitrary, then having a meta-probability distribution over your probability distributions seems even more arbitrary, unless I’m missing something. It doesn’t seem to me like the kind of situation where going meta helps: intuitively, if someone is very unsure about what prior to use in the first place, they should probably also be unsure about coming up with a second-order probability distribution over their set of priors.
All you need in order to come up with that meta-probability distribution is some information about the relative plausibility of each item in your set of probability functions. If our conclusion for a particular dilemma turns on a disagreement between virtue ethics, utilitarian ethics, and deontological ethics, that is a difficult problem that people will disagree strongly about. But can we at least agree that each of these is, say, somewhere between 1% and 99% likely to be the correct moral theory? If so, you have a slightly informative prior and there is a possibility you can make progress. If we really have no idea at all, then I agree, the situation really is entirely clueless. But I think that with extended consideration, many reasonable people might be able to come to an agreement.
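Here’s a minimal sketch of what I mean, with invented numbers for the value each theory assigns to some candidate action:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values (arbitrary units) that virtue ethics,
# utilitarianism, and deontology assign to the action.
theory_values = np.array([0.2, 1.0, -0.5])

# We can't agree on precise weights for the theories, but suppose we
# do agree each deserves between 1% and 99% credence. Sample weight
# vectors uniformly over the simplex, keep those within the bounds,
# and look at the spread of expected values they imply.
evs = []
for _ in range(100_000):
    w = rng.dirichlet([1.0, 1.0, 1.0])
    if w.min() >= 0.01 and w.max() <= 0.99:
        evs.append(w @ theory_values)

evs = np.array(evs)
print(f"mean EV: {evs.mean():.2f}, P(EV > 0): {(evs > 0).mean():.2f}")
```

Even bounds that loose rule out some weightings, which is already more than a state of total cluelessness gives you.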
> Upon immediately encountering the above problem, my brain is like the mug: just another object that does not have an expected value for the act of giving to Malaria Consortium. Nor is there any reason to think that an expected value must “really be there”, deep down, lurking in my subconscious.
I agree with this. If the question is, “can anyone, at any moment in time, give a sensible probability distribution for any question?”, then the answer is “no”.
But with some time, I think you can assign to many difficult-to-estimate things a sensible probability distribution that is neither completely arbitrary nor completely uninformative. So, specifically, while I can’t tell you right now what the expected long-run value of giving to Malaria Consortium is, I think I might be able to spend a year or so understanding the relationship between giving to Malaria Consortium and long-run aggregate sentient happiness, and that might help me to come up with a reasonable estimate of the distribution of values.
We’d still be left with a case where, very counterintuitively, the actual act of saving lives is largely incidental to the real value of giving to Malaria Consortium, but it seems to me we can probably find a value estimate.
About this, Greaves (2016) says,
> averting child deaths has longer-run effects on population size: both because the children in question will (statistically) themselves go on to have children, and because a reduction in the child mortality rate has systematic, although difficult to estimate, effects on the near-future fertility rate. Assuming for the sake of argument that the net effect of averting child deaths is to increase population size, the arguments concerning whether this is a positive, neutral or a negative thing are complex.
And I wholeheartedly agree, but it doesn’t follow from the fact that you can’t immediately form an opinion about it that you can’t, with much research, make an informed estimate that is better than an entirely indeterminate or undefined value.
EDIT: I haven’t heard Greaves’ most recent podcast on the topic, so I’ll check that out and see if I can make any progress there.
EDIT 2: I read the transcript of the podcast that you suggested, and I don’t think it really changes my confidence that estimating a Bayesian joint probability distribution could get you past cluelessness.
> So you can easily imagine that getting just a little bit of extra information would massively change your credences. And there, it might be that here’s why we feel so uncomfortable with making what feels like a high-stakes decision on the basis of really non-robust credences, is because what we really want to do is some third thing that wasn’t given to us on the menu of options. We want to do more thinking or more research first, and then decide the first-order question afterwards.
>
> Hilary Greaves: So that’s a line of thought that was investigated by Amanda Askell in a piece that she wrote on cluelessness. I think that’s a pretty plausible hypothesis too. I do feel like it doesn’t really… It’s not really going to make the problem go away because it feels like for some of the subject matters we’re talking about, even given all the evidence gathering I could do in my lifetime, it’s patently obvious that the situation is not going to be resolved.
My reaction to that (beyond noting that I should read Askell’s piece) is that I disagree with Greaves’s claim that even a lifetime of research could not resolve the subject matter for something like giving to Malaria Consortium. I think it’s quite possible one could make enough progress to arrive at an informative probability distribution. Perhaps it only says “across the probability distribution, there’s a 52% likelihood that giving to x charity is good and a 48% probability that it’s bad”, but if the expected value is high enough, that’s still a strong impetus to give to x charity.
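As a quick sanity check that a near-even sign split is compatible with a high expected value (again, with invented numbers): suppose bad outcomes are modest losses while good outcomes are heavy-tailed gains.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# 48% chance of a modest loss, 52% chance of a heavy-tailed gain.
bad = rng.uniform(-2.0, 0.0, size=n)                # losses in [-2, 0)
good = rng.lognormal(mean=1.0, sigma=1.0, size=n)   # mean ~4.5
value = np.where(rng.random(n) < 0.52, good, bad)

print(f"P(good):        {(value > 0).mean():.2f}")  # ~0.52
print(f"expected value: {value.mean():.2f}")        # clearly positive
```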
I still face the problem that we’ve arrived at a framework in which our choices among short-term interventions are probably going to be dominated by their long-run effects, which is extremely counterintuitive, but at least I have some indication.