2. Credence in X vs credence I should roughly act on X
That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%.
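Spelling out the arithmetic in that quoted figure: if each of twenty independent premises gets 70% credence, the credence that all of them hold together is

$$0.70^{20} \approx 7.98 \times 10^{-4} \approx 0.08\%,$$

where the multiplication implicitly treats the premises as independent.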
I think maybe what you mean, or at least what you should mean, is that your actual confidence that you should, all things considered, act roughly as if the EA framework is correct is probably closer to 50%. And that’s what’s decision-relevant. This captures ideas like those you mention, e.g.:
Well, what are the actual alternatives? Maybe they’re even less likely to be true?
Maybe this is unlikely to be true but really important if true, so I should make a “wager” on it for EV reasons? (A toy calculation below makes this concrete.)
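As a toy version of that wager, with deliberately made-up numbers: suppose my credence that the framework is correct is only $p = 0.1$, acting on it is worth $V = 100$ if it’s correct and $0$ otherwise, and the best alternative is worth a safe $v = 1$ either way. Then

$$\mathbb{E}[\text{act on framework}] = p \cdot V = 0.1 \times 100 = 10 > 1 = \mathbb{E}[\text{alternative}],$$

so even a low credence can make acting on the framework the better bet in expectation. (The numbers here are purely illustrative, not estimates.)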
I think a 50% confidence that the basic EA framework is actually correct[2] seems much too high, given how uncertain we should be about metaethics, consequentialism, axiology, decision theory, etc. But that uncertainty doesn’t mean acting on any other basis actually seems better. And it doesn’t even necessarily mean I should focus on reducing those uncertainties, for reasons including that I think I’m a better fit for reducing other uncertainties that are also very decision-relevant (e.g., whether people should focus on longtermism or other EA cause areas, or how to prioritise within longtermism).
So I think I’m much less than 50% certain that the basic EA framework is actually correct, but also that I should basically act according to it, that I’ll continue doing so for the rest of my life, and that I shouldn’t be constantly stressing out about my uncertainty. (It’s possible that some people would find it harder to put the uncertainty out of mind, even when they think that, rationally speaking, they should do so. For those people, this sort of exercise might be counterproductive.)
[2] By the framework being “actually correct”, I don’t just mean “this framework is useful” or “this framework is the best we’ve got, given our current knowledge”. I mean something like “the claims it is based on are correct, or other claims that justify it are correct”, or “maximally knowledgeable and wise versions of ourselves would endorse this framework as correct or as worth acting on”.