This is super interesting. Thanks for writing it. Do you think you’re conflating several analytically distinct phenomena when you say (i) “Fanaticism is the idea that we should base our decisions on all of the possible outcomes of our actions no matter how unlikely they are … EA fanatics take a roughly maximize expected utility approach” and (ii) “Fanaticism is unreasonable”?
For (i), I mainly have in mind two approaches “fanatics” could be defined by: (ia) “do a quick back-of-the-envelope calculation of expected utility and form beliefs based solely on its output,” and (ib) “do what you actually think maximizes expected utility, no matter whether that’s based on a spreadsheet, heuristic, intuition, etc.” I think (ia) isn’t something basically anyone would defend, while (ib) is something I and many others would defend (and it’s how I think “fanaticism” tends to be used). And for (ib), we need to account for heuristics like (f) quick BOTE calculations tend to overestimate the expected utility of low-probability, high-impact outcomes, and (g) extremely large and extremely small numbers should be sandboxed (e.g., capped in the influence they can have on the conclusion). This is a (large) downside of these “very weird projects,” and I think it makes the “should support” case a lot weaker.
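To make (g) concrete, here’s a toy sketch (my own illustration, with made-up numbers and a simple clip-the-utilities cap; not something from the post) of how sandboxing can change which project a calculation favors:

```python
# Toy illustration of heuristic (g): cap ("sandbox") extreme utilities before
# taking the expectation. The numbers and the cap are made up for illustration.

def expected_value(outcomes):
    """Plain expected value over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def sandboxed_value(outcomes, cap=1e6):
    """Same sum, but each utility is clipped to [-cap, cap] first, so one
    astronomically large payoff can't dominate the conclusion on its own."""
    return sum(p * max(-cap, min(cap, u)) for p, u in outcomes)

weird = [(1e-9, 1e15)]     # tiny chance of an astronomical payoff
mundane = [(0.9, 1000.0)]  # near-certain, modest payoff

print(expected_value(weird), expected_value(mundane))    # 1000000.0 900.0
print(sandboxed_value(weird), sandboxed_value(mundane))  # 0.001 900.0
```

The plain expectation lets the single astronomical outcome dominate; the clipped version lets the mundane project win. Whether that clipping is defensible is, of course, part of what’s at issue here.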
For (ii), I mainly have in mind three claims about fanaticism: (iia) “Fanaticism is unintuitive,” (iib) “Fanaticism is absurd (à la reductio ad absurdum),” and (iic) “Fanaticism breaks some utility axioms.” These each call for different kinds of evidence. For example, (iia) might not matter much if we don’t think our intuitions (which have been trained through evolution and life experience) are reliable for unusual questions like maximizing long-run aggregate utility.
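For concreteness, the verdict these claims target is usually formalized along roughly these lines (my gloss of the standard statement, not a quote from the post): for any sure payoff and any nonzero probability, some prize is large enough that the long-shot gamble on it is preferred.

```latex
% Rough formalization of the fanatical verdict (my gloss, not from the post):
% for every sure value v and every probability \varepsilon > 0, there is a
% prize V large enough that the long-shot lottery beats the sure thing.
\[
  \forall v,\ \forall \varepsilon \in (0,1],\ \exists V :\quad
  \bigl[\,V \text{ with probability } \varepsilon,\ \text{nothing otherwise}\,\bigr]
  \ \succ\ \bigl[\,v \text{ for certain}\,\bigr].
\]
```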
Did you have some of these in mind? Or maybe other operationalizations?
I meant to suggest that our all-things-considered assignments of probability and value should support projects like the ones I laid out. Those assignments might include napkin calculations, but if we know we overestimate those, we should adjust accordingly.
(g) extremely large and extremely small numbers should be sandboxed (e.g., capped in the influence they can have on the conclusion)
This sounds to me like it’s in line with my takeaways. Perhaps we differ on the grounds for sandboxing? Expected value calculations don’t involve capping the influence of component hypotheses. Do you have a take on how you would defend that?
For (ii), I mainly have in mind three claims about fanaticism: (iia) “Fanaticism is unintuitive,” (iib) “Fanaticism is absurd (à la reductio ad absurdum),” and (iic) “Fanaticism breaks some utility axioms.”
I don’t mean to say that fanaticism is wrong, so please don’t read this as a reductio. Interpreted as a claim about rationality, I’m largely inclined to agree with it. What I would disagree with is the normative inference from its rationality to how we should act: let’s not focus less on animal welfare or global poverty because of far-fetched high-value possibilities, even if it would be rational to do so.