No, I’m using the common language meaning. Put it this way: there are seven billion people in the world, and only one of them is the best person to fund (ex ante). If you pick one person, and say “I believe that this is the BEST person to fund, given the information available in 2021”, then there’s a very high chance that you’re wrong, and so this claim isn’t justified. Whereas you can justifiably claim that this person is a very good person to fund.
I guess when I think about “the best action to do”, the normative part of the claim is something about the local map rather than the territory or the global map. I think this has two parts:
1) When I say “X is the best bet” I mean that my subjective probability P(X is best) > P(Y is best) for any specific other member Y of the reference class. I’m not actually betting X against “the field” or claiming P(X is best) > 0.5! (See the small numerical illustration after this list.)
2) If I believe that X is the best bet in the sense of highest probability, then of course, if I were smarter and/or had more information, my assigned probabilities would likely change.
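As a small illustration of sense 1), with made-up numbers: suppose the reference class has 1,000 candidates, and my credences happen to be

$$
P(X \text{ is best}) = 0.03, \qquad P(Y \text{ is best}) \le 0.02 \quad \text{for every other candidate } Y.
$$

Then X is my best bet in this sense, even though P(X is best) is nowhere near 0.5 and a bet on “X against the field” would be a losing one.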
I think the main problem with your definition is that it doesn’t allow you to be wrong. If you say “X is the best bet”, then how can I disagree if you’re accurately reporting information about your subjective credences? Of course, I could respond by saying “Y is the best bet”, but that’s just me reporting my credences back to you. And maybe we’ll change our credences, but at no point in time was either of us wrong, because we weren’t actually talking about the world itself.
Which seems odd, and out of line with how we use this type of language in other contexts. If I say “Mathematics is the best field to study, ex ante” then it seems like I’m making a claim not just about my own beliefs, but also about what can be reasonably inferred from other knowledge that’s available; a claim which might be wrong. In order to use this interpretation, we do need some sort of implicit notion of what knowledge is available, and what can be reasonably inferred from it, but that saves us from making claims that are only about our own beliefs. (In other words, not the local map, nor the territory, but some sort of intermediate “things that we should be able to infer from human knowledge” map.)