2. Credence in X vs credence I should roughly act on X
That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%.
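As a quick sanity check on that arithmetic (a minimal Python snippet; the 70% credence and the 20 premises are just the figures from the sentence above):

```python
# Twenty independent claims, each held at 70% credence, compound to ~0.08%.
compound = 0.7 ** 20

print(f"{compound:.6f}")         # joint probability as a decimal, ~0.000798
print(f"{compound * 100:.2f}%")  # as a percentage, ~0.08%
```

This is what makes the gap to a ~50% overall confidence so stark: multiplying out the chain gives a number three orders of magnitude smaller.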
I think what you mean, or what you should mean, is that your actual confidence that you should, all things considered, act roughly as if the EA framework is correct is probably closer to 50%. And that's what's decision-relevant. This captures ideas like those you mention, e.g.:
Well, what's the actual alternative? Maybe they're even less likely to be true?
Maybe this is unlikely to be true but really important if true, so I should make a "wager" on it for EV reasons?
I think a 50% confidence that the basic EA framework is actually correct[2] seems much too high, given how uncertain we should be about metaethics, consequentialism, axiology, decision theory, etc. But that uncertainty doesn't mean acting on any other basis actually seems better. And it doesn't even necessarily mean I should focus on reducing those uncertainties, for reasons including that I think I'm a better fit for reducing other uncertainties that are also very decision-relevant (e.g., whether people should focus on longtermism or other EA cause areas, or how to prioritise within longtermism).
So I think I'm much less than 50% certain that the basic EA framework is actually correct, but also that I should basically act according to the basic EA framework, and that I'll continue doing so for the rest of my life, and that I shouldn't be constantly stressing out about my uncertainty. (It's possible that some people would find it harder to put the uncertainty out of mind, even when they think that, rationally speaking, they should do so. For those people, this sort of exercise might be counterproductive.)
[2] By the framework being "actually correct", I don't just mean "this framework is useful" or "this framework is the best we've got, given our current knowledge". I mean something like "the claims it is based on are correct, or other claims that justify it are correct", or "maximally knowledgeable and wise versions of ourselves would endorse this framework as correct or as worth acting on".