Fanaticism

Fanaticism is the apparent problem faced by moral theories that rank a gamble offering a minuscule probability of an arbitrarily large value above a guaranteed but modest amount of value.[1][2] Some have argued that fanatical theories should be rejected, and that this rejection might undermine the case for certain philosophical positions, such as longtermism.
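
To make the ranking concrete, here is a toy expected-value comparison; the particular numbers (G, p, and V below) are illustrative assumptions, not figures taken from the works cited:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Toy comparison; every number here is a hypothetical illustration.
% Sure option: a guaranteed $G = 10^{3}$ units of value.
% Gamble: probability $p = 10^{-9}$ of $V = 10^{13}$ units, otherwise nothing.
\[
  \mathbb{E}[\text{gamble}] = p \cdot V
    = 10^{-9} \cdot 10^{13}
    = 10^{4}
    > 10^{3} = G.
\]
% Since $V$ can be made arbitrarily large, the gamble beats the sure option
% for any $p > 0$, however small: this is the ranking at issue.
\end{document}
```

Expected value theory endorses this verdict for any positive probability, however small, which is why it is the standard example of a fanatical theory.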

See also Pascal’s mugging.

Further reading

Beckstead, Nick & Teruji Thomas (2021) A paradox for tiny probabilities and enormous values, Global Priorities Institute.

Wilkinson, Hayden (2022) In defense of fanaticism, Ethics, vol. 132, pp. 445–477.

Wiblin, Robert & Keiran Harris (2021) Christian Tarsney on future bias and a possible solution to moral fanaticism, 80,000 Hours, May 5.

Related entries

alternatives to expected value theory | altruistic wager | decision theory | decision-theoretic uncertainty | expected value | moral uncertainty | naive vs. sophisticated consequentialism | risk aversion

1. Wilkinson, Hayden (2022) In defense of fanaticism, Ethics, vol. 132, pp. 445–477.

2. Tarsney, Christian (2020) The epistemic challenge to longtermism, Global Priorities Institute, section 6.2.

Expected value theory is fanatical, but that’s a good thing

HaydenW · 21 Sep 2020 8:48 UTC
57 points
20 comments · 5 min read · EA link

Tiny Probabilities of Vast Utilities: Solutions

kokotajlod · 14 Nov 2018 16:04 UTC
22 points
5 comments · 6 min read · EA link

Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

kokotajlod · 8 Nov 2018 10:09 UTC
30 points
18 comments · 7 min read · EA link

Tiny Probabilities of Vast Utilities: Concluding Arguments

kokotajlod · 15 Nov 2018 21:47 UTC
33 points
6 comments · 10 min read · EA link

Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

kokotajlod · 10 Nov 2018 9:12 UTC
35 points
10 comments · 8 min read · EA link

Metaethical Fanaticism (Dialogue)

Lukas_Gloor · 17 Jun 2020 12:33 UTC
35 points
10 comments · 15 min read · EA link

Christian Tarsney on future bias and a possible solution to moral fanaticism

Pablo · 6 May 2021 10:39 UTC
26 points
6 comments · 1 min read · EA link
(80000hours.org)

EA is about maximization, and maximization is perilous

Holden Karnofsky · 2 Sep 2022 17:13 UTC
491 points
59 comments · 7 min read · EA link

The case for strong longtermism—June 2021 update

JackM · 21 Jun 2021 21:30 UTC
64 points
12 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

Summary: Against Anti-Fanaticism (Christian Tarsney)

Nic Kruus🔸 · 25 Jan 2024 15:04 UTC
25 points
3 comments · 3 min read · EA link

A dilemma for Maximize Expected Choiceworthiness (MEC)

Calvin_Baker · 1 Sep 2022 14:46 UTC
33 points
7 comments · 10 min read · EA link

The Epistemic Challenge to Longtermism (Tarsney, 2020)

MichaelA🔸 · 4 Apr 2021 3:09 UTC
79 points
27 comments · 2 min read · EA link
(globalprioritiesinstitute.org)

Fanatical EAs should support very weird projects

Derek Shiller · 30 Jun 2022 12:07 UTC
66 points
42 comments · 9 min read · EA link

Christian Tarsney on future bias and a possible solution to moral fanaticism

80000_Hours · 5 May 2021 19:38 UTC
7 points
0 comments · 113 min read · EA link

Possible misconceptions about (strong) longtermism

JackM · 9 Mar 2021 17:58 UTC
90 points
43 comments · 19 min read · EA link

A full syllabus on longtermism

jtm · 5 Mar 2021 22:57 UTC
110 points
13 comments · 8 min read · EA link

Fanaticism in AI: SERI Project

Jake Arft-Guatelli · 24 Sep 2021 4:39 UTC
7 points
2 comments · 5 min read · EA link

The Epistemic Challenge to Longtermism

Global Priorities Institute · 30 Apr 2020 13:38 UTC
5 points
0 comments · 4 min read · EA link
(globalprioritiesinstitute.org)

The scale of animal agriculture

MichaelStJules · 16 May 2024 4:01 UTC
49 points
4 comments · 3 min read · EA link

New article from Oren Etzioni

Aryeh Englander · 25 Feb 2020 15:38 UTC
23 points
3 comments · 2 min read · EA link

[Question] What reason is there NOT to accept Pascal’s Wager?

Transient Altruist · 4 Aug 2022 14:29 UTC
31 points
82 comments · 1 min read · EA link

[Question] Pascal’s Mugging and abandoning credences

AndreaSR · 9 Jul 2021 10:18 UTC
6 points
9 comments · 1 min read · EA link

Egyptology and Fanaticism (Hayden Wilkinson)

Global Priorities Institute · 5 Oct 2023 6:32 UTC
19 points
0 comments · 2 min read · EA link

Resolving moral uncertainty with randomization

Bob Jacobs 🔸 · 29 Mar 2023 10:10 UTC
29 points
3 comments · 10 min read · EA link

Summary: In Defence of Fanaticism (Hayden Wilkinson)

Nic Kruus🔸 · 15 Jan 2024 14:21 UTC
30 points
3 comments · 6 min read · EA link

Ajeya Cotra on worldview diversification and how big the future could be

80000_Hours · 18 Jan 2021 8:35 UTC
14 points
1 comment · 125 min read · EA link

Summary: In defence of fanaticism

Global Priorities Institute · 9 May 2024 15:09 UTC
29 points
1 comment · 6 min read · EA link

Tiny Probabilities of Vast Utilities: Bibliography and Appendix

kokotajlod · 20 Nov 2018 17:34 UTC
10 points
0 comments · 24 min read · EA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · 21 Feb 2021 13:41 UTC
16 points
1 comment · 7 min read · EA link

The demandingness of the future

anormative · 13 Mar 2024 13:02 UTC
6 points
0 comments · 2 min read · EA link

New Global Priorities Institute working papers—and an updated version of “The case for strong longtermism”

Global Priorities Institute · 9 Aug 2021 16:57 UTC
46 points
0 comments · 2 min read · EA link

In defence of fanaticism

Global Priorities Institute · 1 Jan 2021 14:33 UTC
13 points
0 comments · 6 min read · EA link
(globalprioritiesinstitute.org)

Future people might not exist

Indra Gesink 🔸 · 30 Nov 2022 19:17 UTC
18 points
0 comments · 4 min read · EA link

[Question] Moral dilemma

Tormented · 4 Sep 2021 18:10 UTC
5 points
17 comments · 1 min read · EA link

A Paradox for Tiny Probabilities and Enormous Values

Global Priorities Institute · 1 Jul 2021 7:00 UTC
5 points
1 comment · 1 min read · EA link
(globalprioritiesinstitute.org)

What is so wrong with the “dogmatic” solution to recklessness?

tobycrisford 🔸 · 11 Feb 2023 18:29 UTC
25 points
31 comments · 7 min read · EA link

EV Maximization for Humans

Sharmake · 3 Sep 2022 23:44 UTC
12 points
0 comments · 4 min read · EA link