I think most philosophers would say that evidence and reason are different because even if practical rationality requires that you maximize expected utility in one way or another—just as theoretical rationality requires that you conditionalize on your evidence—neither requirement tells you that MORE evidence is better. You can be a perfectly rational, perfectly ignorant agent. That more evidence is better than less is a different kind of epistemological principle from the one that tells you to conditionalize on whatever you’ve managed to get.(1)
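To see how separate the two norms are, here they are side by side (my notation, not anyone’s official statement of them). Conditionalization says your new credence in H after learning E should be your old conditional credence, and expected-utility maximization says to pick the act with the highest probability-weighted utility:

\[ P_{\text{new}}(H) = P_{\text{old}}(H \mid E), \qquad a^* = \arg\max_a \sum_s P(s)\, u(a, s). \]

Neither formula says anything about going out and acquiring more E to conditionalize on in the first place.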
Another way to put it: more evidence is better from the first-person point of view; if you can get more evidence before you decide to act, you should do it! But from the third-person point of view, you shouldn’t criticize people who maximize expected utility on the basis of bad or scarce evidence.
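The first-person half of this is often cashed out as the value-of-information result (I. J. Good’s point that free evidence never hurts in expectation). A toy case, with numbers of my own choosing: two equiprobable states s_1, s_2 and two acts, where act A pays 1 in s_1 and 0 in s_2, and act B pays the reverse. Acting now, either act has expected utility 1/2. If a free, perfectly reliable signal of the state is available, you can match your act to the state and do strictly better:

\[ \max_a \mathrm{EU}(a) = \tfrac{1}{2} \quad < \quad \mathbb{E}_s\!\left[\max_a u(a, s)\right] = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 1 = 1. \]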
Here’s a quote from James Joyce (a causal decision theorist):
“CDT [causal decision theory] is committed to two principles that jointly entail that initial opinions should fix actions most of the time, but not [always]...
CURRENT EVALUATION. If Prob_t(.) characterizes your beliefs at t, then at t you should evaluate each act by its causal expected utility using Prob_t(.).
FULL INFORMATION. You should act on your time-t utility assessment only if those assessments are based on beliefs that incorporate all the evidence that is both freely available to you at t and relevant to the question about what your acts are likely to cause.”
(Joyce, “Regret and Instability in Causal Decision Theory,” 126-127)
...only the first principle is determined by the utility-maximizing equation that’s at the mathematical core of causal decision theory (sketched below). Anyway, that’s my nerdy philosophical lit contribution to the issue ;).
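For the curious, that core equation is usually written something like the following (a common Lewis-style formulation; my gloss rather than Joyce’s exact notation), where the K_k partition the hypotheses about how outcomes causally depend on what you do:

\[ U_t(A) = \sum_k \mathrm{Prob}_t(K_k)\, u(A \wedge K_k). \]

CURRENT EVALUATION just says to apply this with your time-t probabilities; FULL INFORMATION is a further norm about which probabilities deserve to be plugged in.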
(1) In an extreme case, suppose you have NO evidence—you are in the “a priori position” mentioned by RyanCarey. Then reason is like an empty stomach, with no evidence to digest. But it would still contribute the tautologies of pure logic—those are propositions that are true no matter what you conditionalize on, indeed whether you conditionalize on anything at all.
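In probabilistic terms: if T is a tautology, conditionalizing can never dislodge it, since for any evidence E with P(E) > 0,

\[ P(T \mid E) = \frac{P(T \wedge E)}{P(E)} = \frac{P(E)}{P(E)} = 1. \]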