my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it’s the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are ‘forbidden’ (this could well be what the paper tries to do).
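To make the conjunction point concrete, here is a minimal sketch (illustrative numbers of my own, not taken from the comment): the probability of a conjunction of independent events shrinks exponentially, so extreme credences appear without any exotic model.

```python
def conjunction_prob(p: float, n: int) -> float:
    """Probability that n independent events, each with probability p, all occur."""
    return p ** n

# With 30 fair coin flips the conjunction "all heads" is already
# below one in a billion -- the conjunction itself forces an
# extreme credence.
print(conjunction_prob(0.5, 30))  # ≈ 9.3e-10
```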
I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren’t ‘forbidden’ in general.
(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)
I still think that the distinction between credences/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:
I think it’s often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
Often when it seems we have extreme credence in a model this just holds “at a certain level of detail”, and if we looked at a richer space of models that makes more fine-grained distinctions we’d say that our credence is distributed over a (potentially very large) family of models.
There is a difference between an extreme all-things-considered credence (i.e. in this simplified way of thinking about epistemics the ‘expected credence’ across models) and being highly confident in an extreme credence;
I think the latter is less often justified than the former. And again, if it seems that the latter is justified, I think it’ll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we’re considering. (E.g. ~all models agree that I won’t spontaneously die in the next second, or that Santa Claus isn’t going to appear in my bedroom.)
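A small numerical sketch of this point (hypothetical weights and probabilities, chosen only for illustration): even when no single model gets anywhere near extreme confidence, the all-things-considered (mixture) credence in an event can still be extreme, provided every model agrees the event is ~impossible.

```python
# Credence in each of three rival models (none held with extreme confidence).
model_weights = [0.4, 0.35, 0.25]
# P(event) under each model -- all agree the event is ~impossible.
event_probs = [1e-12, 5e-13, 2e-12]

# All-things-considered credence = expected credence across models.
mixture = sum(w * p for w, p in zip(model_weights, event_probs))
print(mixture)  # still on the order of 1e-12: extreme, though no model has weight near 1
```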
When different models agree that some event is the conjunction of many others, each model will assign an extreme credence to some event, but the models might disagree about which events get the extreme credence.
Taken together (i.e. across events/decisions), your all-things-considered credences might therefore look “funny” or “inconsistent” (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, each of which rules out one of the events with extreme probability but not the other.
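Here is what that can look like numerically (again, hypothetical models and a 50/50 credence split, purely for illustration): each model is extreme about one event, yet the mixture is non-extreme about both.

```python
# Two mutually inconsistent hypothetical models: each rules out one
# event with extreme probability but is agnostic about the other.
p_model_a = {"E1": 1e-9, "E2": 0.5}
p_model_b = {"E1": 0.5, "E2": 1e-9}
weights = {"a": 0.5, "b": 0.5}

# All-things-considered credence for each event is the weighted mixture.
for event in ("E1", "E2"):
    mixed = weights["a"] * p_model_a[event] + weights["b"] * p_model_b[event]
    print(event, mixed)  # ≈ 0.25 for both: non-extreme, though each model is extreme about one
```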
I acknowledge that I’m making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what’s going on I would need to spell out what exactly I mean by “often” etc. (Because as I said I do agree that these claims don’t always hold!)