I think most philosophers would say that evidence and reason are different because even if practical rationality requires that you maximize expected utility in one way or another—just as theoretical rationality requires that you conditionalize on your evidence—neither requirement tells you that MORE evidence is better. You can be a perfectly rational, perfectly ignorant agent. That more evidence is better than less is a different kind of epistemological principle from the one that tells you to conditionalize on whatever you’ve managed to get.(1)
Another way to put it: more evidence is better from the first-person point of view. If you can get more evidence before you decide to act, you should get it! But from the third-person point of view, you shouldn’t criticize people who maximize expected utility on the basis of bad or scarce evidence.
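To fix ideas, here is one standard way of writing the two requirements (the notation is mine, not anything the principles themselves dictate). Conditionalization says that on learning evidence E, your new credence in a hypothesis H should be your old credence in H conditional on E:

Prob_new(H) = Prob_old(H | E)

Expected utility maximization says to choose an act a that maximizes

EU(a) = Σ_s Prob(s) · u(a, s)

where the sum runs over the possible states s. Both rules operate on whatever credence function you happen to have; neither says a word about how much evidence went into it.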
Here’s a quote from James Joyce (a causal decision theorist):
“CDT [causal decision theory] is committed to two principles that jointly entail that initial opinions should fix actions most of the time, but not [always]...
CURRENT EVALUATION. If Prob_t(.) characterizes your beliefs at t, then at t you should evaluate each act by its causal expected utility using Prob_t(.).
FULL INFORMATION. You should act on your time-t utility assessment only if those assessments are based on beliefs that incorporate all the evidence that is both freely available to you at t and relevant to the question about what your acts are likely to cause.” (Joyce, “Regret and Instability in Causal Decision Theory,” 126-127)
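To make the comparison concrete, here is the utility-maximizing equation in question, written in the Gibbard–Harper style (Joyce’s own notation differs in its details):

U_t(A) = Σ_i Prob_t(A > O_i) · u(O_i)

where A > O_i is the counterfactual ‘if I were to perform A, outcome O_i would result’ and u assigns utilities to outcomes.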
Note that only the first principle is determined by this utility-maximizing equation at the mathematical core of causal decision theory; the second is an extra norm. Anyway, that’s my nerdy philosophical lit contribution to the issue ;).
(1) In an extreme case, suppose you have NO evidence—you are in the “a priori position” mentioned by RyanCarey. Then reason is like an empty stomach, with no evidence to digest. Still, reason would contribute the tautologies of pure logic: propositions that are true no matter what you conditionalize on, indeed whether you conditionalize on anything at all.
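In probabilistic terms, the point is just this: for any tautology T, the probability axioms give

Prob(T) = 1 and Prob(T | E) = 1 for every E with Prob(E) > 0

so no course of conditionalization, empty or otherwise, can dislodge your certainty in T.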
I had the intuitions you were looking for at first...but I’m not sure they withstood reflection! In the basic case, I have $100, which I can (i) keep for myself, (ii) donate to an ineffective charity, or (iii) donate to an effective charity. Surely (iii) is better than (ii), which is better than (i); yet (ii) is impermissible while (i) isn’t. Likewise for (i) staying where I am, (ii) saving the one, and (iii) saving the 100. In general, it’s just odd when permissibility fails to supervene on goodness in this way.
The Kagan case looks at first like it has the same structure, but the intuitions there seem to depend at least in part on knowledge, as well as cost: the reason it’s permissible not to enter the building is that you don’t know there’s a child in there. Once you do know, perhaps saving the child is morally required, after all. So it’s not as clear that you have the same structure of (i)-(iii) as in your lakes case.
I’m not sure your #1 is really an instance of the conjunction fallacy (which is having a higher credence in a conjunction—BANK TELLER & FEMINIST—than in a single conjunct—BANK TELLER). I might call it the “Outsourcing Fallacy”: the belief that it’s always better to go meta and get someone else to do the first-order work (here, donating). Obviously that’s not true, though: if it costs me $5 to get each of two people to donate $1, I should have skipped the exercise and gone first-order.
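Two bits of arithmetic back this up. The conjunction fallacy is a fallacy because the probability axioms entail

Prob(A & B) ≤ Prob(A)

for any propositions A and B. And in the outsourcing case: paying $5 apiece to get two people to donate $1 each means $10 spent for $2 donated, whereas donating the $10 directly puts five times as much into first-order work.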
There are well-understood explanations of when and why people fall victim to the conjunction fallacy. Why do people commit the outsourcing fallacy? A simple answer: doing so gives me apparent evidence that my influence over others is greater than it really is, which is good for my ego?