I can’t conveniently assume good and bad unknown unknowns ‘cancel out’
FWIW, my take would be:
No, we shouldn’t assume that they “cancel out”
However, as a structural fact[*] about the world, the prevalence of good and bad unknown unknowns is correlated with the good and bad knowns (and known unknowns)
So, on average and in expectation, things will point in the same direction as the analysis ignoring cluelessness (although it’s worth being conscious that this will turn out wrong in a significant fraction of cases ― probably approaching 50% for something like cats vs dogs)
Of course this relies heavily on the “fact” I denoted as [*], but really I’m saying “I hypothesise this to be a fact”. My reasons for believing it are something like:
Some handwavey argument along these lines:
Among the many complex things we could consider, they will vary in the proportion of considerations that point in a good direction
If our knowledge sampled randomly from the available considerations, we would expect this correlation
It’s too much to expect our knowledge to sample randomly ― there will surely sometimes be structural biases ― but there’s no reason to expect the deviations to be so perverse as to (on average) actively mislead
(this needn’t preclude the existence of some domains with such a perverse pattern, but I’d want a positive argument that something might be such a domain)
Given that we shouldn’t expect the good and bad unknown unknowns to cancel out, by default we should expect them to correlate with the knowns
A sense that empirically this kind of correlation is true in less clueless-like situations
e.g. if I uncover a new consideration about whether it’s good or bad for EAs to steal-to-give, it’s more likely to point to “bad” than “good”
Combined with something like a simplicity prior ― if this effect exists for things where we have a fairly strong sense of the considerations we can track, by default I’d expect it to exist in weaker form for things where we have a weaker sense of the considerations we can track (rather than being non-existent or occurring in a perverse form)
In principle, this could be tested experimentally. In practice, you’re going to be chasing after tiny effect sizes with messy setups, so I don’t think it’s viable any time soon for human judgement. I do think you might hope to one day run experiments along these lines for AI systems. Of course they would have to be cases where we have some access to the ground truth, but the AI is pretty clueless—perhaps something like getting non-superintelligent AI systems to predict outcomes in a complex simulated world.
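The random-sampling intuition above can be pictured with a toy Monte Carlo sketch (entirely my own construction; every number and the setup itself are illustrative assumptions, not anything from the thread). Each "topic" gets an underlying proportion of good considerations; we then check how often a small random sample of known considerations points in the same direction as the full set:

```python
import random

random.seed(0)

def simulate(n_topics=2000, n_considerations=100, n_known=10):
    # Toy model: each topic has its own underlying proportion of "good"
    # considerations; our knowledge is a small random sample of them.
    agree = 0
    counted = 0
    for _ in range(n_topics):
        p_good = random.random()  # assumed uniform across topics
        considerations = [1 if random.random() < p_good else -1
                          for _ in range(n_considerations)]
        known = random.sample(considerations, n_known)  # what we happen to know
        k, t = sum(known), sum(considerations)
        if k != 0 and t != 0:
            counted += 1
            if k * t > 0:  # known balance points the same way as the full balance
                agree += 1
    return agree / counted

print(simulate())
```

With these assumed numbers the agreement rate lands well above 50%, which is the claimed correlation; shrinking `n_known`, or biasing how the sample is drawn, are the obvious knobs for modelling weaker knowledge or the "structural biases" mentioned above.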
Thanks a lot for expanding on that! To confirm whether we’ve identified at least one of the cruxes, I’d be curious to know what you think of the following.
Say I am clueless about the (dis)value of the alien counterfactual we should expect (i.e., whether another civilization someday replacing our own after we go extinct would be better or worse than our civilization maintaining control over our corner of the Universe). One consideration I have identified is that there is, all else equal, a selection effect against caring about suffering for grabby civilizations. But all else is of course not equal, and there might be plenty of considerations I haven’t thought of and/or will never be aware of that support the opposite conclusion, or other relevant considerations that have nothing to do with care for suffering. I’m clueless. By ‘I’m clueless’, I don’t mean ‘I have a 50% credence the alien counterfactual is better’. Instead, I mean ‘my credence is severely indeterminate/imprecise, such that I can’t compute the expected value of reducing X-risks (unless I decide to give up on impartial consequentialism and ignore things like the alien counterfactual, which I’m clueless about)’ (for a case for how cluelessness threatens expected value reasoning in this way, see e.g. Mogensen 2021).
Your above argument is based on the assumption that our credences all ought to be determinate/precise and that cluelessness = 50% credence, right? It’s probably not worth discussing here whether this assumption is justified, but do you also think that’s one of the cruxes here?
I think this is at least in the vicinity of a crux?
My immediate thoughts (I’d welcome hearing about issues with these views!):
I don’t think our credences all ought to be determinate/precise
But I’ve also never been satisfied with any account I’ve seen of indeterminate/imprecise credences
(though noting that there’s a large literature there and I’ve only seen a tiny fraction of it)
My view would be something more like:
As boundedly rational actors, it makes sense for a lot of our probabilities to be imprecise
But this isn’t a fundamental indeterminacy — rather, it’s a view that it’s often not worth expending the cognition to make them more precise
By thinking longer about things, we can get the probabilities to be more precise (in the limit converging on some precise probability)
At any moment, we have a credence (itself somewhat imprecise absent further thought) about where our probabilities would end up with further thought
What’s the point of tracking all these imprecise credences rather than just single precise best-guesses?
It helps to keep tabs on where more thinking might be helpful, as well as where you might easily be wrong about something
On this perspective, cluelessness = inability to get the current best guess point estimate of where we’d end up to deviate from 50% by expending more thought
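The bounded-rationality picture above can be sketched as a toy model (my own construction; the 0.7 "underlying balance", the step counts, and the crude 1/√n uncertainty band are all illustrative assumptions, not anything from the thread). Each step of "thinking" turns up one consideration, and the credence band narrows around the running average as they accumulate:

```python
import random

random.seed(1)

def deliberate(underlying_balance, steps):
    # Each step of "thinking" samples one consideration; the credence
    # interval narrows around the running average as evidence accumulates.
    hits = sum(1 for _ in range(steps) if random.random() < underlying_balance)
    mean = hits / steps
    halfwidth = 1 / (2 * steps ** 0.5)  # crude uncertainty band, ~1/sqrt(n)
    return (max(0.0, mean - halfwidth), min(1.0, mean + halfwidth))

quick = deliberate(0.7, 10)      # little thought: wide band
careful = deliberate(0.7, 1000)  # more thought: band narrows toward a precise value
print(quick, careful)
```

On this sketch, cluelessness would correspond to a case where no amount of extra steps moves the centre of the band away from 50%.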
Oh, my bad. I don’t think it’s really a crux, then, or at least not the key one. I guess I can’t narrow it down any more precisely than whether your “fact[*]” is true, in that case. And it looks like I misunderstood the assumptions behind your justification of it.
I’ll brush up on my limited knowledge of the literature on unawareness (and maybe dive deeper) and see to what extent your “fact[*]” has already been discussed. I’m sure it has. Then I’ll go back to your justification of it to see whether I understand it better and whether I can actually say I disagree.
Thanks for all your thoughts!