Fanaticism

Fanaticism is the apparent problem faced by moral theories that rank a minuscule probability of an arbitrarily large value above a guaranteed modest amount of value.[1][2] Some have argued that fanatical theories should be rejected and that this might undermine the case for certain philosophical positions, such as longtermism.
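
A minimal numerical sketch (in Python; all values below are hypothetical, chosen only to illustrate the structure of the problem) of the ranking that makes expected value theory fanatical:

```python
# Illustrative comparison: a guaranteed modest outcome vs. a minuscule
# chance of an arbitrarily large one. All numbers are hypothetical.

sure_value = 1_000_000        # value of the guaranteed option
gamble_probability = 1e-12    # minuscule probability that the gamble pays off
gamble_payoff = 1e30          # arbitrarily large value if it does

ev_sure = sure_value                            # EV of the sure thing: 1e6
ev_gamble = gamble_probability * gamble_payoff  # EV of the gamble: 1e18

# Expected value theory ranks the gamble far above the sure thing, even
# though it almost certainly yields nothing; this is the fanatical verdict.
assert ev_gamble > ev_sure
print(f"EV(sure thing) = {ev_sure:.3g}, EV(gamble) = {ev_gamble:.3g}")
```

However large the guaranteed option, the gamble's payoff can always be raised until the gamble wins, which is why no finite sure value escapes the problem.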

See also Pascal’s mugging.

Further reading

Beckstead, Nick & Teruji Thomas (2021) A paradox for tiny probabilities and enormous values, Global Priorities Institute.

Wilkinson, Hayden (2022) In defense of fanaticism, Ethics, vol. 132, pp. 445–477.

Wiblin, Robert & Keiran Harris (2021) Christian Tarsney on future bias and a possible solution to moral fanaticism, 80,000 Hours, May 5.

Related entries

alternatives to expected value theory | altruistic wager | decision theory | decision-theoretic uncertainty | expected value | moral uncertainty | naive vs. sophisticated consequentialism | risk aversion

1. Wilkinson, Hayden (2022) In defense of fanaticism, Ethics, vol. 132, pp. 445–477.

2. Tarsney, Christian (2020) The epistemic challenge to longtermism, Global Priorities Institute, section 6.2.

Expected value theory is fanatical, but that’s a good thing

HaydenW · Sep 21, 2020, 8:48 AM
57 points · 20 comments · 5 min read · EA link

Better difference-making views

MichaelStJules · Dec 21, 2024, 6:27 PM
31 points · 0 comments · 14 min read · EA link

Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

kokotajlod · Nov 10, 2018, 9:12 AM
35 points · 10 comments · 8 min read · EA link

Tiny Probabilities of Vast Utilities: Solutions

kokotajlod · Nov 14, 2018, 4:04 PM
22 points · 5 comments · 6 min read · EA link

Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

kokotajlod · Nov 8, 2018, 10:09 AM
30 points · 18 comments · 7 min read · EA link

Metaethical Fanaticism (Dialogue)

Lukas_Gloor · Jun 17, 2020, 12:33 PM
35 points · 10 comments · 15 min read · EA link

Tiny Probabilities of Vast Utilities: Concluding Arguments

kokotajlod · Nov 15, 2018, 9:47 PM
33 points · 6 comments · 10 min read · EA link

Christian Tarsney on future bias and a possible solution to moral fanaticism

Pablo · May 6, 2021, 10:39 AM
26 points · 6 comments · 1 min read · EA link
(80000hours.org)

EA is about maximization, and maximization is perilous

Holden Karnofsky · Sep 2, 2022, 5:13 PM
494 points · 59 comments · 7 min read · EA link

The case for strong longtermism—June 2021 update

JackM · Jun 21, 2021, 9:30 PM
64 points · 12 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

Fanatical EAs should support very weird projects

Derek Shiller · Jun 30, 2022, 12:07 PM
66 points · 42 comments · 9 min read · EA link

The Epistemic Challenge to Longtermism (Tarsney, 2020)

MichaelA🔸 · Apr 4, 2021, 3:09 AM
79 points · 27 comments · 2 min read · EA link
(globalprioritiesinstitute.org)

A dilemma for Maximize Expected Choiceworthiness (MEC)

Calvin_Baker · Sep 1, 2022, 2:46 PM
33 points · 7 comments · 10 min read · EA link

Summary: Against Anti-Fanaticism (Christian Tarsney)

Nic Kruus🔸 · Jan 25, 2024, 3:04 PM
25 points · 3 comments · 3 min read · EA link

Arguments for utilitarianism are impossibility arguments under unbounded prospects

MichaelStJules · Oct 7, 2023, 9:09 PM
39 points · 48 comments · 1 min read · EA link

The Epistemic Challenge to Longtermism

Global Priorities Institute · Apr 30, 2020, 1:38 PM
5 points · 0 comments · 4 min read · EA link
(globalprioritiesinstitute.org)

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · Feb 21, 2021, 1:41 PM
16 points · 1 comment · 7 min read · EA link

Tiny Probabilities of Vast Utilities: Bibliography and Appendix

kokotajlod · Nov 20, 2018, 5:34 PM
10 points · 0 comments · 24 min read · EA link

Fanaticism in AI: SERI Project

Jake Arft-Guatelli · Sep 24, 2021, 4:39 AM
7 points · 2 comments · 5 min read · EA link

Possible misconceptions about (strong) longtermism

JackM · Mar 9, 2021, 5:58 PM
90 points · 43 comments · 19 min read · EA link

Ajeya Cotra on worldview diversification and how big the future could be

80000_Hours · Jan 18, 2021, 8:35 AM
14 points · 1 comment · 125 min read · EA link

Christian Tarsney on future bias and a possible solution to moral fanaticism

80000_Hours · May 5, 2021, 7:38 PM
7 points · 0 comments · 113 min read · EA link

A full syllabus on longtermism

jtm · Mar 5, 2021, 10:57 PM
110 points · 13 comments · 8 min read · EA link

New article from Oren Etzioni

Aryeh Englander · Feb 25, 2020, 3:38 PM
23 points · 3 comments · 2 min read · EA link

[Question] Pascal’s Mugging and abandoning credences

AndreaSR · Jul 9, 2021, 10:18 AM
6 points · 9 comments · 1 min read · EA link

Egyptology and Fanaticism (Hayden Wilkinson)

Global Priorities Institute · Oct 5, 2023, 6:32 AM
19 points · 0 comments · 2 min read · EA link

Resolving moral uncertainty with randomization

Bob Jacobs 🔸 · Mar 29, 2023, 10:10 AM
29 points · 3 comments · 10 min read · EA link

Summary: In Defence of Fanaticism (Hayden Wilkinson)

Nic Kruus🔸 · Jan 15, 2024, 2:21 PM
30 points · 3 comments · 6 min read · EA link

Summary: In defence of fanaticism

Global Priorities Institute · May 9, 2024, 3:09 PM
29 points · 1 comment · 6 min read · EA link

The scale of animal agriculture

MichaelStJules · May 16, 2024, 4:01 AM
49 points · 4 comments · 3 min read · EA link

[Question] What reason is there NOT to accept Pascal’s Wager?

Transient Altruist · Aug 4, 2022, 2:29 PM
31 points · 82 comments · 1 min read · EA link

What is so wrong with the “dogmatic” solution to recklessness?

tobycrisford 🔸 · Feb 11, 2023, 6:29 PM
25 points · 31 comments · 7 min read · EA link

[Question] Moral dilemma

Tormented · Sep 4, 2021, 6:10 PM
5 points · 17 comments · 1 min read · EA link

A Paradox for Tiny Probabilities and Enormous Values

Global Priorities Institute · Jul 1, 2021, 7:00 AM
5 points · 1 comment · 1 min read · EA link
(globalprioritiesinstitute.org)

In defence of fanaticism

Global Priorities Institute · Jan 1, 2021, 2:33 PM
13 points · 0 comments · 6 min read · EA link
(globalprioritiesinstitute.org)

EV Maximization for Humans

Sharmake · Sep 3, 2022, 11:44 PM
12 points · 0 comments · 4 min read · EA link

Future people might not exist

Indra Gesink 🔸 · Nov 30, 2022, 7:17 PM
18 points · 0 comments · 4 min read · EA link

New Global Priorities Institute working papers—and an updated version of “The case for strong longtermism”

Global Priorities Institute · Aug 9, 2021, 4:57 PM
46 points · 0 comments · 2 min read · EA link

The demandingness of the future

anormative · Mar 13, 2024, 1:02 PM
6 points · 0 comments · 2 min read · EA link