FWIW, I don’t think P1 and P2 together logically imply the conclusion as stated. I think you’re probably leaving out some unstated premises (that might be uncontroversial, but should be checked).
For example, could anything other than evolutionary pressures (direct or indirect) work “against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering how these impact the far future”?
Now you might say no, because our judgement calls are outputs of systems built up through evolution.
But I think an additional premise should capture that. It is not a tautology, but (possibly) an empirical fact.
Another: does making judgement calls well enough to have warranted beliefs (in humans) about something require any (past) pressure against incorrect judgement calls (about those things in particular, or in domains from which there is enough generalization to the particular things)?
I think you’d say yes, but this is also an empirical claim, not a tautology.
> Could anything other than evolutionary pressures (direct or indirect) work “against individuals unable to make the correct judgment calls regarding what actions do more good than harm (in expectation) considering how these impact the far future”?
Fair! One could say it’s not evolution but God or something that gave us such an ability (or the ability to know we have such an ability, though for unknown reasons).
> Another: does making judgement calls well enough to have warranted beliefs (in humans) about something require any (past) pressure against incorrect judgement calls (about those things in particular, or in domains from which there is generalization)?
I don’t understand how this differs from your first example. Can you think of a way one could argue for the negative on this? That’d probably help me spot the difference.
The second one is more about the grounds for justification (having warranted beliefs). Maybe judgement calls don’t need to tend to be correct, and there doesn’t need to be the right kind of fit with calibration, for the resulting beliefs to be warranted. Maybe the mere fact that something seems a certain way, e.g. even direct intuition about highly speculative things like the far-future effects of interventions, can justify belief.
EDIT: This could be consistent with phenomenal conservatism.
Like maybe your beliefs don’t need to track the truth better than random to be warranted? Fair. I was implicitly assuming that they do.
Yes, or we don’t need to have any specific reason to believe they do better than random. I think this could be consistent with phenomenal conservatism.