Let’s loosely interpret judgment calls as “hard-to-explain intuitions”, as you wrote, for simplicity. I think that’s enough here.
For the US 2024 presidential election, there are definitely such judgment calls involved. If one tries to make an evolutionary argument undermining our ability to predict the US 2024 presidential election, P1 holds. P2 visibly doesn’t, however, at least for some good predictors: there is empirical evidence against P2. And presumably, the reason P2 doesn’t hold is that people with decent hard-to-explain intuitions vis-à-vis “where the wind blows” in such socio-political contexts survived better. The same can’t be said (at least, not obviously) for forecasting whether making altruistic people more longtermist does more good than harm, considering all the consequences on everything from now until the end of time.
> But I don’t see why we would need to end up at 50%
Say you say 53% and Alice says 45%. The two of you can give me all the arguments you want. At the end of the day, you both undeniably made judgment calls when weighing the reasons to believe making altruistic people more longtermist does more good than harm, all things considered, against the reasons to believe the opposite (including, in both cases, reasons that have to do with aliens, acausal reasoning, and how to deal with crucial unknown unknowns). I don’t see why I should trust either of your two different judgment-call-laden “best guesses” more than the other.
In fact, if I can’t find a good objection to P2, I have no good reason to trust either of your best guesses any more than a dart-throwing chimp’s. If I had an opinion on the (dis)value of making altruistic people more longtermist without having a good reason to reject P2, I’d be blatantly inconsistent. [1]
Now that we’ve hopefully clarified what is and isn’t a judgment call here, do you agree? (I think P2 is definitely the crux for whether we should be clueless. Defending the claim that we can identify positive longtermist causes without resorting to any sort of hard-to-explain intuitions seems really untenable. And I think there may be better objections to P2 than the ones I address in the post.)
[1] Btw, a bit tangential, but a key popular assumption/finding in the literature on decision-making under deep uncertainty is that “not having an opinion” or “suspending judgment” =/= 50% credence (see this post from DiGiovanni for a nice overview).
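To make that footnote concrete, here is a minimal sketch in Python. The ±1 payoffs and the [0.45, 0.53] interval are purely illustrative assumptions of mine (the interval just spans Alice’s and your best guesses); the point is only that suspending judgment, modeled as an imprecise credence, behaves differently from a precise 50% credence:

```python
# Toy model: "suspending judgment" is not the same as a 50% credence.
# Assumed payoffs (purely illustrative): longtermist outreach is worth
# +1 if it does more good than harm, -1 otherwise.
GOOD, BAD = 1.0, -1.0

def expected_value(p):
    """Expected value of doing outreach, given credence p that it is net good."""
    return p * GOOD + (1 - p) * BAD

# A precise Bayesian at exactly 50% gets EV = 0 and is indifferent:
print(expected_value(0.50))  # 0.0

# Suspended judgment modeled as an imprecise credence spanning, say,
# Alice's 45% and your 53%:
ev_low, ev_high = expected_value(0.45), expected_value(0.53)
print(ev_low, ev_high)  # about -0.10 and +0.06

# Indifference (EV exactly 0) licenses flipping a coin; an EV whose sign
# is indeterminate (could be -0.10, could be +0.06) arguably licenses
# neither acting nor refraining, which is the sense in which suspending
# judgment differs from holding a 50% credence.
```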
So if we take as given that I am at 53% and Alice is at 45%, that gives me some reason to do longtermist outreach, and gives Alice some reason to try to stop me, perhaps by making moral trades with me that get us both more of what we value. In this case, cluelessness doesn’t bite, as Alice and I are still taking action towards our longtermist ends.
However, I think what you are claiming, or at least the version of your position that makes most sense to me, is that both Alice and I would be making a failure of reasoning if we assigned these specific credences, and that we should both be ‘suspending judgment’. And if I grant that, then yes, it seems cluelessness bites, as neither Alice nor I knows at all what to do now.
So it seems to come down to whether we should be precise Bayesians.
Re judgment calls: yes, I think that makes sense, though I’m not sure it is such a useful category. I would think there is just a spectrum of arguments/pieces of evidence running from ‘very well empirically grounded and justified’ through ‘we have some moderate reason to think so’ to ‘we have roughly no idea’, and what we are labeling judgment calls sits towards the far end of this spectrum. But surely there isn’t a clear cut-off point.