I think a fair bit might come down to what we mean by "judgement calls".
Let's take the example of predicting who would win the US 2024 presidential election. Reasonable, well-informed people could and did disagree about what the fair market price for such prediction contracts was. There are many important reasons on either side. If two people were perfect rationalist Bayesians, they would pool their collective evidence (including hard-to-explain intuitions) and both end up with the same joint probability estimate.
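To make that pooling concrete, here's a toy sketch (my own illustration, with hypothetical numbers): if each forecaster's private evidence is independent given the outcome, pooling amounts to adding log-odds updates, and after sharing everything both forecasters land on the same posterior, which need not be 50%.

```python
import math

def prob_to_logodds(p):
    return math.log(p / (1 - p))

def logodds_to_prob(l):
    return 1 / (1 + math.exp(-l))

# Hypothetical numbers: a shared 50% prior, plus each forecaster's
# private evidence expressed as a log-odds update away from that prior.
prior = prob_to_logodds(0.50)                 # = 0.0
my_update = prob_to_logodds(0.53) - prior     # my private evidence
alice_update = prob_to_logodds(0.45) - prior  # Alice's private evidence

# Assuming the two bodies of evidence are independent given the outcome,
# fully sharing them leaves both of us at the same pooled posterior.
pooled = logodds_to_prob(prior + my_update + alice_update)
print(f"pooled posterior: {pooled:.3f}")      # ~0.480, not 0.5
```

The independence assumption is doing real work here (in practice, shared background evidence means the pooled posterior isn't a simple sum), but the qualitative point stands: agreement doesn't force 50%.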
So to take it back to your example: maybe Alice and I are both reasonable people and, after discussing thoroughly, both update towards each other. But I don't see why we would need to end up at 50%. I suppose if by judgement call we mean "there is room for reasonable disagreement" then I agree with you, but if we mean the far stronger "rational predictors should be at 50% on the question", that seems unwarranted. And it seems to me that for cluelessness to bind, we need the strong 50% version? Otherwise we can just act on the balance of probabilities, while also trying to gain more relevant information.
Let's imprecisely interpret judgment calls as "hard-to-explain intuitions", as you wrote, for simplicity. I think that's enough here.
For the US 2024 presidential election, there are definitely such judgment calls involved. If one tries to make an evolutionary argument undermining our ability to predict the US 2024 presidential election, P1 holds. P2 visibly doesn't, however, at least for some good predictors. There is empirical evidence against P2. And presumably, the reason why P2 doesn't hold is that people who had decent hard-to-explain intuitions vis-à-vis "where the wind blows" in such socio-political contexts survived better. The same can't be said (at least, not obviously) for forecasting whether making altruistic people more longtermist does more good than harm, considering all the consequences on everything from now until the end of time.
> But I don't see why we would need to end up at 50%
Say you say 53% and Alice says 45%. The two of you can give me all the arguments you want. At the end of the day, you both undeniably made judgment calls when weighing the reasons to believe that making altruistic people more longtermist does more good than harm, all things considered, against the reasons to believe the opposite (including reasons, in both cases, that have to do with aliens, acausal reasoning, and how to deal with crucial unknown unknowns). I don't see why I should trust either of your two different judgment-cally "best guesses" any more than the other.
In fact, if I can't find a good objection to P2, I have no good reason to trust either of your best guesses any more than those of a dart-throwing chimp. If I had an opinion on the (dis)value of making altruistic people more longtermist without having a good reason to reject P2, I'd be blatantly inconsistent. [1]
Do you agree, now that we've hopefully clarified what is a judgment call and what isn't, here? (I think P2 is definitely the crux for whether we should be clueless. Defending the claim that we can identify positive longtermist causes without resorting to any sort of hard-to-explain intuitions seems really untenable. And I think there may be better objections to P2 than the ones I address in the post.)
[1] Btw, a bit tangential, but a key, popular assumption/finding in the literature on decision-making under deep uncertainty is that "not having an opinion" or "suspending judgment" ≠ 50% credence (see this post from DiGiovanni for a nice overview).
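To make the footnote's point concrete, here's a toy sketch of my own (the payoffs are stipulated, not from the literature): if suspending judgment is modelled as an interval of credences rather than a sharp 0.5, the expected value of acting flips sign across the interval, so no determinate verdict emerges.

```python
GOOD, HARM = 1.0, -1.0  # stipulated payoffs if acting does good / harm

def expected_value(p):
    """Expected value of acting, given credence p that acting does good."""
    return p * GOOD + (1 - p) * HARM

# A sharp 50% credence yields a unique verdict: exact indifference.
print(expected_value(0.50))        # 0.0

# Suspended judgment modelled as the credence interval [0.40, 0.60]:
# the expected value changes sign across the set, so acting is neither
# recommended nor ruled out -- this is where cluelessness bites.
for p in (0.40, 0.50, 0.60):
    print(p, expected_value(p))    # -0.2, 0.0, +0.2
```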
So if we take as given that I am at 53% and Alice is at 45%, that gives me some reason to do longtermist outreach, and gives Alice some reason to try to stop me, perhaps by making moral trades with me that get us more of what we both value. In this case, cluelessness doesn't bite, as Alice and I are still taking action towards our longtermist ends.
However, I think what you are claiming, or at least the version of your position that makes most sense to me, is that both Alice and I would be committing a failure of reasoning if we assigned these specific credences, and that we should both be "suspending judgement". And if I grant that, then yes, it seems cluelessness bites, as neither Alice nor I has any idea what to do now.
So it seems to come down to whether we should be precise Bayesians.
Re judgment calls: yes, I think that makes sense, though I'm not sure it is such a useful category. I would think there is just a spectrum of arguments/pieces of evidence, from "very well empirically grounded and justified" through "we have some moderate reason to think so" to "we have roughly no idea", and what we are labelling judgement calls sits towards the far right of this spectrum. But surely there isn't a clear cut-off point.