General lesson: if we don’t have any good way of dealing with imprecise credences, we probably shouldn’t regard them as rationally mandatory.
I worry that this is motivated reasoning. Should what we are justified in believing about the consequences of our actions depend on whether that belief yields satisfactory moral consequences (e.g. avoiding decision paralysis)?